Abstract

Mask-based lensless imagers are smaller and lighter than traditional lensed cameras. In these imagers, the sensor does not directly record an image of the scene; rather, a computational algorithm reconstructs it. Typically, mask-based lensless imagers use a model-based reconstruction approach that suffers from long compute times and a heavy reliance on both system calibration and heuristically chosen denoisers. In this work, we address these limitations using a bounded-compute, trainable neural network to reconstruct the image. We leverage our knowledge of the physical system by unrolling a traditional model-based optimization algorithm, whose parameters we optimize using experimentally gathered ground-truth data. Optionally, images produced by the unrolled network are then fed into a jointly-trained denoiser. As compared to traditional methods, our architecture achieves better perceptual image quality and runs 20$\times$ faster, enabling interactive previewing of the scene. We explore a spectrum between model-based and deep learning methods, showing the benefits of using an intermediate approach. Finally, we test our network on images taken in the wild with a prototype mask-based camera, demonstrating that our network generalizes to natural images.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Mask-based lensless imagers (lensless imagers) are a class of computational cameras in which the lens is replaced with a phase or amplitude mask placed a short distance in front of the sensor (Fig. 1). Unlike conventional (lensed) cameras, which directly record an image, lensless cameras map each point in the scene to many sensor pixels, indirectly encoding scene information into the sensor measurement. A reconstruction algorithm is then used to recover the final image. This architecture enables small, cheap, and lightweight designs which can be used for portable or in vivo imaging [16]. Additionally, the inherent multiplexing of lensless cameras can make them amenable to compressive measurement of higher-dimensional signals, such as 3D volumes [3,7] or video [8], from a single 2D measurement. Lensless cameras have been used for 3D fluorescence microscopy [4,9], thermal imaging [10], and refocusable photography [11].


Fig. 1. Overview of our imaging pipeline. During training, images are displayed on a computer screen and captured simultaneously with both a lensed and a lensless camera to form training pairs, with the lensed images serving as ground truth labels. The lensless measurements are fed into a model-based network which incorporates knowledge about the physics of the imager. The output of the network is compared with the labels using a loss function and the network parameters are updated through backpropagation. During operation, the lensless imager takes measurements and the trained model-based network is used to reconstruct the images, providing a large speedup in reconstruction time and an improvement in image quality.


Image reconstruction methods for lensless cameras fall into two general categories: single-step and iterative reconstructions. Single-step reconstructions can be fast, but often require custom fabricated masks that must be carefully aligned to the sensor [1,2,10,11]. In addition, it is difficult to incorporate priors and leverage compressed sensing in single-step reconstructions. Iterative reconstructions are much slower, but do not impose stringent restrictions on the mask itself, generally produce better results, and allow priors to be used [12,13]. However, due to imperfect system modeling, these methods may still give significant reconstruction artifacts. Additionally, the high complexity of the computation precludes interactive previewing of the scene and requires expensive, bulky compute hardware. In this work, we focus on iterative methods, improving both the image quality and speed with a new reconstruction framework that incorporates the advantages of both deep learning and physical models, making lensless cameras more practical for everyday imaging.

The classical approach to image recovery is to use convex optimization to iteratively minimize a loss function [14,15] consisting of a data-fidelity term and an optional hand-picked regularization term. The data-fidelity term enforces that the recovered image, with the known imaging model applied to it, matches the measurement. The regularization term enforces prior knowledge of image statistics (e.g. non-negative, sparse gradients) and serves to regularize ill-conditioned problems. Iterative approaches are interpretable, but are sensitive to reconstruction artifacts due to model mismatch, calibration errors, hand-tuned parameters, and hand-picked regularizers which are not necessarily representative of the data. Each of these contributes to reconstruction artifacts and degrades image quality. Furthermore, these methods can take hundreds to thousands of iterations to converge, which is often too slow for real-time imaging.

Recently, deep learning-based methods for image reconstruction have risen in popularity. In deep methods, a convolutional neural network (CNN) is used for image reconstruction [16–18]. Networks have hundreds of thousands of parameters which are updated using large datasets of image pairs. These networks are able to learn complex scene statistics, but do not incorporate any prior knowledge about the image formation process. Compared to iterative methods, deep learning-based methods are hard to interpret, do not have convergence guarantees, and have no structured way to incorporate knowledge of the imaging system physics.

Unrolled optimization represents a middle-ground between classic and deep methods. In unrolled optimization, a fixed number of iterations from a classic algorithm is interpreted as a deep network, with each iteration serving as a layer in the network. In each layer, if the parameters of the algorithm are differentiable with respect to the output, they can be optimized for a given loss function through backpropagation. In this framework, the sparsifying filters, hyper-parameters, or shrinkage function can be learned from the training examples [19,20]. Unrolled optimization has shown promising results for image denoising [21,22], sparse coding [19], and MRI reconstructions [23].

Here, we unroll the iterative alternating direction method of multipliers (ADMM) algorithm with a variable splitting specific to lensless imaging [3,15]. This allows us to incorporate knowledge of the image formation process into the neural network as well as learn the network parameters from the data. To train our network, we experimentally capture a large dataset of lensed and lensless images (Fig. 1). We train our network on a perceptual similarity metric in order to produce images that are visually similar to those from our ground truth lensed camera. We present several variations of networks along the spectrum between classic and deep methods by varying the number of trainable parameters (Fig. 2). Specifically, we introduce three architectures, Le-ADMM, Le-ADMM*, and Le-ADMM-U, each with an increasing number of trainable parameters, explained in detail in Sec. 4. All of our networks have bounded compute time that can be adjusted according to the application. The networks trade off data fidelity and perceptual image quality, producing more visually appealing images at the price of decreased data fidelity.


Fig. 2. Networks on a scale from classic to deep. We will present several networks specifically designed for lensless imaging (Le-ADMM, Le-ADMM*, and Le-ADMM-U). We compare these to classic approaches, which have no learnable parameters, and to purely deep methods which do not include any knowledge of the imaging model. We will show the utility of using an algorithm in this middle range compared to a purely classic or deep method. $\Theta$ summarizes the parameters that are learned for each network as discussed in Section 4.


We test our network using DiffuserCam [12] as our prototypical lensless camera, built with off-the-shelf components and a low-end camera sensor. Although our network is trained using images from a computer monitor, we demonstrate the generalization of our network to measurements of natural objects taken in the wild. We believe that this exploratory work shows the promise of using unrolled neural networks for lensless imaging, and our results suggest the utility of combining knowledge of the physics together with deep learning for the best performance.

Our contributions include:

  • 1. A bounded-time trainable network architecture that incorporates knowledge of the physical model for lensless imaging.
  • 2. An experimental dataset of 25,000 aligned lensed and lensless image pairs taken using a beamsplitter and computer screen.
  • 3. A demonstration of 20$\times$ speedup and 3$\times$ improvement in perceptual similarity for lensless imaging reconstructions on an experimental system.
  • 4. Generalization of the network to images taken in the wild on a prototype lensless camera.

2. Lensless imaging forward model

First, we describe our lensless imaging forward model for DiffuserCam. Based on this, we formulate our traditional model-based reconstruction (Sec. 3), before moving on to modifications that span the spectrum from model-based to deep learning-based algorithms (Fig. 2) in Sec. 4.

DiffuserCam [3,12] is a compact, easy-to-build imaging system that consists only of a diffuser (a transparent phase mask with pseudo-random slowly varying thickness) placed a few millimeters in front of a standard image sensor (see Fig. 1). Light from a point source in the scene is refracted by the diffuser to create a high-contrast caustic pattern on the sensor, which is the point spread function (PSF) of the system (Fig. 1). Since the diffuser is thin, the PSF can be modeled as shift-invariant: a lateral shift of the point source in the scene causes a translation of the PSF in the opposite direction. We model the scene as a collection of point sources with varying color and intensity. Assuming all points are incoherent with each other, the sensor measurement, $\mathbf {b}$, can be described as:

$$\begin{aligned} {\textbf {b}}(x,y) &= {\textrm{crop}} [ {\textbf {h}}(x,y) * {\textbf {x}}(x,y)]\\ &= {\textbf {C}} {\textbf {H}} {\textbf {x}} , \end{aligned}$$
where $\mathbf {h}$ is the system PSF, $\mathbf {x}$ represents the scene, and $(x,y)$ are the sensor coordinates. Here, $*$ denotes 2D discrete linear convolution, which returns an array that is larger than both the scene and the PSF. Therefore, a crop operation restricts the output to the physical sensor size. This relation is represented compactly in matrix-vector notation with ${\textrm {crop}}$ denoted as $\mathbf {C}$ and convolution with the PSF denoted as $\mathbf {H}$. Equation (1) is computed separately for each color channel.
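This forward model can be made concrete with a short numerical sketch. The following NumPy function (simplified for illustration; not the implementation used in our experiments) applies Eq. (1) to one color channel: a zero-padded FFT computes the full 2D linear convolution $\mathbf{h} * \mathbf{x}$, and keeping the central sensor-sized window implements the crop operator $\mathbf{C}$:

```python
import numpy as np

def forward(h, x):
    """Simulate b = crop(h * x) for one color channel.

    h : 2D PSF, sampled at the sensor resolution.
    x : 2D scene, on the same grid spacing as h.
    The full linear convolution is computed with zero-padded FFTs, then
    the central region the size of the physical sensor (the PSF array) is kept.
    """
    sh = (h.shape[0] + x.shape[0] - 1, h.shape[1] + x.shape[1] - 1)
    full = np.fft.irfft2(np.fft.rfft2(h, sh) * np.fft.rfft2(x, sh), sh)
    r0 = (sh[0] - h.shape[0]) // 2   # crop operator C: keep the
    c0 = (sh[1] - h.shape[1]) // 2   # central sensor-sized window
    return full[r0:r0 + h.shape[0], c0:c0 + h.shape[1]]
```

As a sanity check, a delta-function PSF centered on the sensor reproduces the scene exactly, reflecting the shift-invariance of the model.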

Our goal is to recover the scene, $\mathbf {x}$, from the measurement $\mathbf {b}$. We assume the PSF is known, as it can easily be measured experimentally with an LED point source [3]. Traditional model-based methods for recovering $\mathbf {x}$ solve a regularized optimization problem of the following form:

$$\hat{\mathbf{x}} = \mathop{\textrm{arg min}}\limits_{\mathbf{x}\geq 0} \frac{1}{2} \| \mathbf{b} - \mathbf {C H} \mathbf{x} \|_2^{2} + \tau \| {\Psi} \mathbf{x}\|_1,$$
where $\Psi$ is a sparsifying transform, such as finite differences for total variation (TV) denoising, and $\tau$ is a tuning parameter that adjusts the sparsity level.
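For example, with $\Psi$ chosen as 2D finite differences, $\|\Psi \mathbf{x}\|_1$ is the anisotropic total variation of the image. A minimal illustrative sketch (our code, with a replicate boundary assumption, not a unique choice):

```python
import numpy as np

def finite_diff(x):
    """Anisotropic finite-difference transform Psi for TV regularization:
    stacks horizontal and vertical first differences, with the last
    row/column replicated so each output has the shape of x."""
    dx = np.diff(x, axis=1, append=x[:, -1:])   # horizontal differences
    dy = np.diff(x, axis=0, append=x[-1:, :])   # vertical differences
    return np.stack([dx, dy])

def tv(x):
    """Anisotropic total variation ||Psi x||_1."""
    return np.abs(finite_diff(x)).sum()
```

A constant image has zero TV, while a single vertical step edge contributes one unit of variation per row, which is why the $\ell_1$ penalty favors piecewise-constant reconstructions.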

3. Model-based inverse algorithm

The traditional model-based inverse solver relies on the known physics of the forward model to solve Eq. (2), minimizing the difference between the actual and predicted measurements, while satisfying any additional constraints. This problem can be solved efficiently by ADMM [15] with a variable splitting that leverages the structure of the problem [3]. In ADMM, the problem is reformulated as:

$$\begin{aligned} \hat{\mathbf{x}} &= \mathop{\textrm{arg min}}\limits_{w\geq 0, u, v} \frac{1}{2} \| \mathbf{b} - \mathbf{C} v \|_2^{2} + \tau \| u\|_1,\\ &s.t.\,\, v= \mathbf{H} \mathbf{x}, u = \Psi \mathbf{x}, w = \mathbf{x}. \end{aligned}$$
This variable splitting allows closed-form updates for each step, as derived in [3]. The update equations in each iteration become:
$$\begin{aligned} u^{k+1}& \leftarrow \mathcal{T}_{\tau/\mu_2}(\boldsymbol \Psi \mathbf{x}^{k} + \alpha_2^{k}/\mu_2) &{\textrm{sparsifying soft-threshold}}\\ v ^{k+1}& \leftarrow (\mathbf{C}^{\mathbf{T}} \mathbf{C} + \mu_1I)^{{-}1}(\alpha_1^{k} + \mu_1 \mathbf{H} \mathbf{x}^{k} + \mathbf{C}^{\mathbf{T}} \mathbf{b}) &{\textrm{least-squares update}}\\ w^{k+1} &\leftarrow {\textrm{max}}(\alpha_3^{k}/\mu_3 + \mathbf{x}^{k}, 0) &{\textrm{enforce non-negativity}} \\ \mathbf{x}^{k+1} &\leftarrow (\mu_1 \mathbf{H}^{\mathbf{T}} \mathbf{H} + \mu_2 \boldsymbol \Psi^{\mathbf{T}} \boldsymbol\Psi + \mu_3I)^{{-}1} r^{k} &{\textrm{least-squares update}}\\ \alpha_1^{k+1} &\leftarrow \alpha_1^{k} + \mu_1(\mathbf{H} \mathbf{x}^{k+1} - v^{k+1}) &{\textrm{dual for }}v\\ \alpha_2^{k+1} &\leftarrow \alpha_2^{k} + \mu_2(\boldsymbol\Psi \mathbf{x}^{k+1} - u^{k+1}) &{\textrm{dual for }}u\\ \alpha_3^{k+1} &\leftarrow \alpha_3^{k} + \mu_3(\mathbf{x}^{k+1} - w^{k+1}) &{\textrm{dual for }}w\\ {\textrm{where }} r^{k} &= ((\mu_3w^{k+1}-\alpha_3^{k}) + \boldsymbol \Psi^{\mathbf{T}}(\mu_2 u^{k+1} - \alpha_2^{k}) + \mathbf{H}^{\mathbf{T}}(\mu_1 v^{k+1} - \alpha_1^{k})). \end{aligned}$$
Here, $\alpha _1$, $\alpha _2$, and $\alpha _3$ are the Lagrange multipliers, or dual variables, respectively associated with $u$, $v$, and $w$, and $\mu _1$, $\mu _2$, and $\mu _3$ are scalar penalty parameters. $\mathcal {T}_{\tau /\mu _2}$ denotes vectorial soft-thresholding with parameter $\tau /\mu _2$.
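The updates above can be sketched on a toy problem with explicit matrices. In the actual system the $v$- and $\mathbf{x}$-updates are evaluated in closed form using the diagonal structure of $\mathbf{C}^{\mathbf{T}}\mathbf{C}$ and FFT diagonalization of $\mathbf{H}$ [3]; the dense inverses below are purely for illustration on small inputs:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding T_t(v)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm(b, H, C, Psi, tau, mu1, mu2, mu3, n_iter=200):
    """Toy dense-matrix version of the ADMM updates in the text.

    b is the measurement; H, C, Psi are explicit matrices standing in for
    convolution, crop, and the sparsifying transform. Variable names follow
    the update equations above.
    """
    n = H.shape[1]
    x = np.zeros(n)
    a1 = np.zeros(C.shape[1])     # dual for v = Hx
    a2 = np.zeros(Psi.shape[0])   # dual for u = Psi x
    a3 = np.zeros(n)              # dual for w = x
    v_solve = np.linalg.inv(C.T @ C + mu1 * np.eye(C.shape[1]))
    x_solve = np.linalg.inv(mu1 * H.T @ H + mu2 * Psi.T @ Psi + mu3 * np.eye(n))
    for _ in range(n_iter):
        u = soft_threshold(Psi @ x + a2 / mu2, tau / mu2)        # sparsity
        v = v_solve @ (a1 + mu1 * (H @ x) + C.T @ b)             # least squares
        w = np.maximum(a3 / mu3 + x, 0.0)                        # non-negativity
        r = (mu3 * w - a3) + Psi.T @ (mu2 * u - a2) + H.T @ (mu1 * v - a1)
        x = x_solve @ r                                          # least squares
        a1 = a1 + mu1 * (H @ x - v)                              # dual updates
        a2 = a2 + mu2 * (Psi @ x - u)
        a3 = a3 + mu3 * (x - w)
    return x
```

With identity operators and a negligible $\tau$, the iterates converge to the non-negative measurement itself, which is the expected minimizer of Eq. (3) in that degenerate case.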

This traditional method is based on the physical model of the imaging system (Eq. (1)) and requires no additional calibration data beyond the PSF. However, it depends heavily on hand-chosen values, such as the sparsifying transform $\Psi$ and its associated parameter, $\tau$. The optimization parameters, $\mu _1$, $\mu _2$, and $\mu _3$, are either hand-tuned or auto-tuned based on the primal and dual residuals at each iteration [15]. The method performs well under correctly chosen sparsifying transforms and with the proper hand-tuned parameters. However, in practice ADMM takes hundreds of iterations to converge and produces images with reconstruction artifacts. In the next section, we will outline how we unroll ADMM into a neural network in order to learn the hyper-parameters from the data and seamlessly interface with existing deep learning pipelines.

4. Learned reconstruction networks

Next, we present several variations of neural networks that jointly incorporate known physical models and deep learning principles. Each network is based on unrolling the iterative ADMM algorithm, such that each iteration comprises a layer of the network, with the tunable parameters learned from the training data. Thus, the physical model is inherently built into the network architecture, making it more efficient.

We present three variations of networks, each having a different number of learned parameters. Learned ADMM (Le-ADMM) has trainable tuning and hyper-parameters. Le-ADMM* extends Le-ADMM by replacing the hand-chosen sparsifying transform with a trainable CNN. Finally, Le-ADMM-U adds a trainable CNN-based deep denoiser as the last layer of the Le-ADMM network, learning both the hyper-parameters of Le-ADMM and the denoiser. Figure 2 summarizes these methods and where they fall on a scale from classic to deep, and the following sections describe them in detail. Each method has progressively more trainable parameters, and therefore needs a larger training dataset. All networks use 5 iterations of unrolled ADMM in order to target a 20$\times$ speed improvement, reducing each reconstruction from 1.5 s to 75 ms, a practical speed for real-world imaging.

4.1 Learned ADMM (Le-ADMM)

In the simplest of our unrolled networks, Le-ADMM (learned ADMM), we model each $k^{\textrm {th}}$ iteration of ADMM as a layer in a neural network, outlined in Fig. 3. In Le-ADMM, the optional denoiser step depicted in Fig. 3 is omitted. We denote the collection of update equations at the $k^{\textrm {th}}$ step of ADMM as $\mathbf {S}^{k}$. These update equations are given by:

$$\begin{aligned} \mathbf S^{k+1} \leftarrow \begin{cases} u^{k+1} \leftarrow \mathcal{T}_{{\color{blue}{\fbox{$\color{black}{\displaystyle{\tau^{k}}}$}}}}( \boldsymbol\Psi(\mathbf{x}^{k}) + \alpha_2^{k}/{\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_2^{k}}}$}}}) & \text{sparsifying soft-thresholding}\\ v ^{k+1} \leftarrow (\mathbf C^{\mathbf T} \mathbf C + \mu_1I)^{-1}(\alpha_1^{k} + {\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_1^{k}}}$}}} \mathbf H \mathbf x^{k} + \mathbf C^{\mathbf T} \mathbf b) & \text{least-squares update}\\ w^{k+1} \leftarrow \text{max}(\alpha_3^{k}/ {\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_3^{k}}}$}}} + \boldsymbol x^{k}, 0) & \text{enforce non-negativity} \\ \mathbf x^{k+1} \leftarrow ({\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_1^{k}}}$}}} \mathbf H^{\mathbf T} \mathbf H + {\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_2^{k}}}$}}} \boldsymbol\Psi^{\mathbf T} \boldsymbol\Psi + {\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_3^{k}}}$}}}I)^{-1} r^{k} & \text{least-squares update} \\ \alpha_1^{k+1} \leftarrow \alpha_1^{k} + {\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_1^{k}}}$}}}(\mathbf H \mathbf x^{k+1} - v^{k+1}) & \text{dual for v}\\ \alpha_2^{k+1} \leftarrow \alpha_2^{k} + {\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_2^{k}}}$}}}( \boldsymbol\Psi(\mathbf x^{k+1}) - u^{k+1}) & \text{dual for u} \\ \alpha_3^{k+1} \leftarrow \alpha_3^{k} + {\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_3^{k}}}$}}}(\mathbf x^{k+1} - w^{k+1}) & \text{dual for w} \\ \end{cases} \\ \text{where } r^{k} = (({\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_3^{k}}}$}}}w^{k+1}-\alpha_3^{k}) + \boldsymbol\Psi^{\mathbf T}({\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_2^{k}}}$}}} u^{k+1} - \alpha_2^{k}) + \mathbf H^{\mathbf T}({\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_1^{k}}}$}}} v^{k+1} - \alpha_1^{k})).\end{aligned}$$


Fig. 3. Model-based network architecture. The input measurement and the calibration PSF are first fed into N layers of unrolled Le-ADMM. At each layer, the updates corresponding to $\mathbf {S}^{k+1}$ in Eq. (4) are applied. The output of this can be fed into an optional denoiser network. The network parameters are updated based on a loss function comparing the output image to the lensed image. Red arrows represent backpropagation through the network parameters.


The trainable parameters are outlined in blue and can be summarized by $\Theta = \{\mu _1^{k}, \mu _2^{k}, \mu _3^{k}, \tau ^{k} \}$, where $k$ represents the iteration number. For 5 unrolled layers, we have a total of 20 learned parameters. After a fixed number of ADMM iterations the reconstruction is compared to the ground truth (lensed) image using the loss function described in Section 4.5. The trainable parameters are updated using backpropagation to minimize this loss across multiple training examples. Le-ADMM can be interpreted as a data-tuned ADMM where the parameters that are typically hand-tuned or auto-tuned are now updated based on the data in order to minimize a data-driven loss function.
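To make the unrolling concrete, the sketch below runs a 5-layer Le-ADMM on a toy dense problem with $\Psi = \mathbf{C} = I$, giving each layer its own $(\mu_1, \mu_2, \mu_3, \tau)$. We train these parameters by backpropagation in PyTorch; the self-contained sketch stands in for that with crude finite-difference gradients and greedy step acceptance, purely to show the structure of the 20-parameter network:

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def le_admm(b, H, theta):
    """Unrolled ADMM: one layer per row of theta = [[mu1, mu2, mu3, tau], ...].
    Toy version with Psi = C = I (no crop, identity sparsifier)."""
    n = H.shape[1]
    x = np.zeros(n)
    a1 = np.zeros(n); a2 = np.zeros(n); a3 = np.zeros(n)
    for mu1, mu2, mu3, tau in theta:
        u = soft(x + a2 / mu2, tau / mu2)
        v = (a1 + mu1 * (H @ x) + b) / (1.0 + mu1)   # (C^T C + mu1 I)^{-1} with C = I
        w = np.maximum(a3 / mu3 + x, 0.0)
        r = (mu3 * w - a3) + (mu2 * u - a2) + H.T @ (mu1 * v - a1)
        x = np.linalg.solve(mu1 * H.T @ H + (mu2 + mu3) * np.eye(n), r)
        a1 = a1 + mu1 * (H @ x - v)
        a2 = a2 + mu2 * (x - u)
        a3 = a3 + mu3 * (x - w)
    return x

def train(b, H, x_gt, theta, steps=25, lr=0.05, eps=1e-5):
    """Finite-difference stand-in for backpropagation through the layers;
    candidate steps are only accepted when they reduce the loss."""
    loss = lambda th: np.mean((le_admm(b, H, th) - x_gt) ** 2)
    cur = loss(theta)
    for _ in range(steps):
        g = np.zeros_like(theta)
        for idx in np.ndindex(theta.shape):
            tp = theta.copy(); tp[idx] += eps
            g[idx] = (loss(tp) - cur) / eps
        cand = np.maximum(theta - lr * g, 1e-3)   # keep mu, tau positive
        if loss(cand) < cur:
            theta, cur = cand, loss(cand)
        else:
            lr *= 0.5
    return theta, cur
```

With shared ground truth pairs $(\mathbf{b}, \mathbf{x}_{gt})$, this per-layer parameterization is exactly what distinguishes Le-ADMM from hand-tuned ADMM: the 20 scalars are chosen to minimize a data-driven loss rather than by hand.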

4.2 Le-ADMM*, with learned regularizer

Le-ADMM* has the same overall structure as Le-ADMM, but also includes a learnable regularizer based on a CNN. The new update steps are summarized below:

$$\begin{aligned}\mathbf{S}^{k+1} \leftarrow \begin{cases} u^{k+1} \leftarrow {\color{blue}{\fbox{$\color{black}{\displaystyle{\mathcal{N}}}$}}}(\mathbf x^{k}) & \text{network regularizer}\\ v ^{k+1} \leftarrow (\mathbf C^{\mathbf T} \mathbf C + \mu_1I)^{-1}(\alpha_1^{k} + {\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_1^{k}}}$}}} \mathbf H \mathbf x^{k} + \mathbf C^{\mathbf T} \mathbf b) & \text{least-squares update}\\ w^{k+1} \leftarrow \text{max}(\alpha_3^{k}/ {\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_3^{k}}}$}}} + \boldsymbol x^{k}, 0) & \text{enforce non-negativity} \\ \mathbf x^{k+1} \leftarrow ({\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_1^{k}}}$}}} \mathbf H^{\mathbf T} \mathbf H + {\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_2^{k}}}$}}} I + {\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_3^{k}}}$}}}I)^{-1} r^{k} & \text{least-squares update} \\ \alpha_1^{k+1} \leftarrow \alpha_1^{k} + {\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_1^{k}}}$}}}(\mathbf H \mathbf x^{k+1} - v^{k+1}) & \text{dual for v}\\ \alpha_3^{k+1} \leftarrow \alpha_3^{k} + {\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_3^{k}}}$}}}(\mathbf x^{k+1} - w^{k+1}) & \text{dual for w} \\ \end{cases} \\ \text{where } r^{k} = (({\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_3^{k}}}$}}}w^{k+1}-\alpha_3^{k}) + {\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_2^{k}}}$}}} u^{k+1} + \mathbf H^{\mathbf T}({\color{blue}{\fbox{$\color{black}{\displaystyle{\mu_1^{k}}}$}}} v^{k+1} - \alpha_1^{k})).\end{aligned}$$
${\mathcal {N}}$ represents a learnable network, applied at each ADMM iteration. For this learnable transform, we use a small U-Net based on [24] consisting of a single encoding and decoding step (complete architecture is available in the Appendix). Because ${\mathcal {N}}$ does not represent the solution to a well-defined convex problem, we drop $\alpha _2$, the dual variable associated with $u$. The learnable parameters for Le-ADMM* are thus given by ${\Theta = \{\mu _1^{k}, \mu _2^{k}, \mu _3^{k}, \mathcal {N} \}}$, which is a total of 32,135 learned parameters. The added learned parameters allow Le-ADMM* to learn a better prior on the data and account for model-mismatch errors in the forward model, at the price of requiring additional training data.

4.3 Le-ADMM-U

Our third variation of unrolled networks is Le-ADMM followed by a learned denoiser, as shown in Fig. 3. Here, a U-Net is used as the denoiser [24]. This method has the most learnable parameters, having a total of 10,605,927 learned parameters, all but 20 of which are from the U-Net. The parameters of Le-ADMM-U, given by $\Theta = \{\mu _1^{k}, \mu _2^{k}, \mu _3^{k}, \tau ^{k}, \mathcal {U} \}$, are jointly updated throughout training. The Le-ADMM portion of the network performs the bulk of the deconvolution and includes knowledge of the forward model, while the U-Net denoises the final image, is able to correct model mismatch errors, and makes the images look more visually appealing. Our denoiser network architecture is described in the Appendix.

4.4 U-Net

For completeness, we also compare to a purely deep method with no knowledge of the system physics in the reconstruction. For this, we directly use the U-Net architecture from [24], resulting in 10,605,907 learned parameters. We summarize this network architecture in the Appendix.

4.5 Loss functions

The loss function must be carefully selected because it dictates the parameter updates throughout the training process. In classic methods, ground truth is unavailable, so the loss is a function of the consistency of the final image, $\hat {\mathbf {x}}$, with the measurement model and any image priors. With the inclusion of ground truth training data pairs, we now have access to another class of loss functions that directly compare a given reconstructed image to its associated ground truth image, $\mathbf {x}_{gt}$. One common loss is the mean-squared error (MSE) loss with respect to the ground truth, $\|\mathbf {x}_{gt} - \hat {\mathbf {x}}\|^{2}_2$. However, MSE favors low frequencies and generally results in learned reconstructions that are blurry and lack detail [25]. Here, we will use the Learned Perceptual Image Patch Similarity metric (LPIPS) that uses deep features and aims to quantify a perceptual distance between two images, as introduced in [25]. During training, we use a combination of both MSE and LPIPS, as outlined in Section 5. These loss functions are summarized in Table 1.
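As an illustration of how the two terms can be combined, the sketch below uses a simple linear ramp from MSE-dominated to LPIPS-dominated training (the ramp itself is merely one possible choice; the LPIPS value would in practice be produced by the pretrained perceptual network of [25]):

```python
def combined_loss(mse, lpips_dist, epoch, n_epochs):
    """Shift weight linearly from the MSE term to the perceptual (LPIPS)
    term over training. `mse` and `lpips_dist` are precomputed scalar
    losses for a batch; the linear schedule is an illustrative assumption."""
    w = epoch / max(n_epochs - 1, 1)   # 0 at the first epoch, 1 at the last
    return (1.0 - w) * mse + w * lpips_dist
```

Early epochs thus emphasize pixel-wise fidelity, while later epochs refine perceptual quality.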

5. Implementation

For training, we simultaneously collect a set of lensless and ground truth image pairs using an experimental setup consisting of a lensed camera, a DiffuserCam, a beamsplitter, and computer monitor (Fig. 1). The cameras and computer monitor are simultaneously triggered, which allows us to display and capture all the training pairs in the dataset overnight.

Our DiffuserCam prototype consists of an off-the-shelf diffuser (Luminit 0.5$^{\circ }$) with a laser-cut paper aperture placed approximately 9 mm from a CMOS sensor. The lensed camera is focused at the plane of the computer screen, approximately 10 cm away. We capture a calibration PSF (see Fig. 1) using an LED point source placed at the distance of the computer screen, which sets the focal plane of the DiffuserCam. For both DiffuserCam and the ground truth camera, we use Basler Dart (daA1920-30uc) sensors. We use a 6 mm S-mount lens for the ground truth camera and calibrate the lens distortion using OpenCV’s undistort camera calibration procedure [26]. To achieve pixel-wise alignment between the image pairs, we first optically align the two cameras, then further calibrate by displaying a series of points on the computer monitor that span the field-of-view. We reconstruct these point images and compute the homography transform needed to co-align both cameras’ coordinate systems. This transform is applied to all subsequent images.
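The co-alignment step fits a homography from the reconstructed calibration points to their displayed locations (we use OpenCV in practice). A self-contained direct-linear-transform sketch of that fit, for illustration only:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (both (N, 2), N >= 4)
    by the direct linear transform: each correspondence contributes two
    linear constraints, and the solution is the null vector of the stacked
    system, recovered via SVD."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_homography(Hm, pts):
    """Apply Hm to (N, 2) points using homogeneous coordinates."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ Hm.T
    return p[:, :2] / p[:, 2:3]
```

Once fit, the same transform is applied to every subsequent image to keep the lensed and lensless views pixel-wise registered.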

Our dataset consists of 25,000 images from the MirFlickr dataset [27]. The raw data from each sensor is 1920$\times$1080 pixels, but is down-sampled by a factor of 4 in each direction, to 480$\times$270. This is necessary due to moiré fringes from the screen which degrade our lensed image quality. We split the dataset into 24,000 training images and 1,000 test images. Our networks are implemented in PyTorch and trained on a Titan X GPU, using the Adam optimizer [28] throughout training. We find that using a combined loss based on MSE and LPIPS works best in practice. We weight MSE more heavily during earlier epochs and weight LPIPS more heavily during later epochs for further refinement. Source code is available at [29]. When displaying the final images, we crop to 380$\times$210 pixels to avoid displaying areas beyond the borders of the computer monitor.

6. Results

After training, we compare the performance of our unrolled networks against both classic ADMM and the fully deep U-Net. Since the number of iterations of ADMM affects both speed and quality of the result, we compare against both ADMM run until convergence (100 iterations) as well as ADMM bounded to 5 iterations. Bounded ADMM takes a similar time to run as our unrolled networks and converged ADMM sets a baseline for the best performance classic algorithms can achieve. On the deep side, we compare against a U-Net which is trained using our raw DiffuserCam measurements and ground truth labels.

The reconstruction results on our test set (images displayed on the monitor, but not used during training) show that our fastest learned networks produce images of similar or better quality than converged ADMM in the same amount of time as bounded ADMM (5 iterations), a 20$\times$ speedup. Furthermore, we show reconstructions of natural images in the wild (not from a computer monitor), demonstrating that our networks are able to generalize to 3D objects with variable lighting conditions.

6.1 Test set results

Table 2 summarizes the reconstruction performance and speed of our learned networks on the test set. Here we can see that our fastest networks (Le-ADMM and Le-ADMM-U) are 20$\times$ faster than the classic reconstruction algorithm (converged ADMM) and have similar or better average MSE and LPIPS scores. Le-ADMM* is slightly slower due to its inclusion of a CNN on the uncropped image in each unrolled layer, but it is still an order of magnitude faster than converged ADMM. As we move along the scale from classic to deep (Le-ADMM $\rightarrow$ Le-ADMM* $\rightarrow$ Le-ADMM-U), our networks achieve better MSE and LPIPS scores, but worse data fidelity.


Table 2. Network performance on test set

Figure 4 shows several sample images from our test set reconstructions. Here we can see that our networks (Le-ADMM, Le-ADMM*, Le-ADMM-U) produce images that are of equal or better quality than converged ADMM. We can see that bounded ADMM has streaky artifacts, but our learned networks do not. Le-ADMM-U has the best reconstruction performance overall and produces images that are visually similar to the ground truth images. Overall, Le-ADMM-U has 3$\times$ better image quality than converged ADMM as measured by the LPIPS metric. The U-Net does not perform as well as Le-ADMM-U, having inconsistent colors and missing higher frequencies. This shows the utility in combining model-based and deep methods.


Fig. 4. Test set results, with the raw DiffuserCam measurement (contrast stretched) and the ground truth images from the lensed camera for reference. Le-ADMM (71 ms) has similar image quality to converged ADMM (1.5 s) and better image quality than bounded ADMM (71 ms). Le-ADMM* and Le-ADMM-U have noticeably better visual image quality. The U-Net by itself is unable to reconstruct the appropriate colors and lacks detail.


Figure 5(a) plots the distribution of MSE, LPIPS, and data fidelity scores for the test set. We can see that Le-ADMM-U has the best LPIPS and MSE scores and outperforms converged ADMM, whereas Le-ADMM has similar LPIPS and MSE scores to converged ADMM with many fewer training pairs. Here we can clearly see the trend of data fidelity increasing as MSE and LPIPS decrease, showing that there is a trade-off between image quality and matching the imaging model. We interpret this as our system model being imperfect, which prevents purely model-based algorithms from achieving the best image quality. As we increase the number of learned parameters, we are able to correct artifacts introduced by model mismatch, producing more visually appealing images that better match the lensed camera. Figure 5(b) analyzes how the reconstruction evolves through the layers of the learned network. The MSE and LPIPS scores tend to decrease with each iteration, while data fidelity increases. For Le-ADMM-U, the U-Net greatly improves the LPIPS and MSE values at the cost of data fidelity.

 figure: Fig. 5.

Fig. 5. Network performance on test set. (a) Here we plot the MSE, LPIPS, and Data Fidelity values for all image pairs in our test set. On average, our learned networks (green) are more similar to the ground truth lensed images (lower MSE and LPIPS) than 5 iterations of ADMM. Furthermore, our networks have comparable performance to ADMM (100), which takes 20$\times$ longer than Le-ADMM and Le-ADMM-U. However, the data fidelity term is higher for the learned methods, indicating that these reconstructions are less consistent with the image formation model. (b) Here we plot performance after each layer (or equivalently, each ADMM iteration) in our network, showing that MSE and LPIPS generally decrease throughout the layers. The U-Net denoiser layer in Le-ADMM-U significantly decreases the LPIPS and MSE values, at the cost of data fidelity.

Download Full Size | PPT Slide | PDF

6.2 Generalization to images in the wild

Next, we remove the computer monitor and capture DiffuserCam images of natural objects. Figure 6 shows some example reconstructions using our learned networks. Again, we see that our networks produce images of similar or higher visual quality than converged ADMM. In particular, Le-ADMM-U again produces the most visually appealing images and has better image quality than converged ADMM. This shows that our learned networks are able to generalize beyond imaging a computer monitor to situations with dramatically different lighting conditions.


Fig. 6. Network performance on objects in the wild (toys and a plant) captured with our lensless camera. We show the raw measurement (contrast stretched) on the top row, followed by converged ADMM, ADMM bounded to 5 iterations, our learned networks, and U-Net for comparison. Our learned networks have similar or better image quality than converged ADMM, and Le-ADMM-U has the best image quality. For instance, Le-ADMM-U is able to capture the details in the sideways plant (second column from left) and the eye of the toy duck (right). The U-Net alone has good image quality, but is missing some colors and details (e.g. the first image is washed out and the nose of the alligator toy is miscolored).


7. Discussion

Our work presents a preliminary analysis of unrolled, model-based neural networks on a real experimental lensless imaging system. We show that it is favorable to choose a network that combines classic and deep methods: with only a few learned parameters we perform comparably to classic algorithms in a fraction of the time, and increasing the number of learned parameters greatly improves image quality. The number of learned parameters in the network could also be varied depending on the application. For instance, scientific imaging applications might choose fewer learned parameters to prevent overfitting to the training data, while photography applications may prefer a deeper method with more parameters, potentially producing more visually appealing images at the expense of possibly hallucinating details not present in the scene.

The quality and resolution of our reconstructions are bounded by those of our training dataset, including any imperfections in the physical system. For instance, any aberrations introduced by our lensed camera or beamsplitter will affect the learned reconstructions, since the lensed images are used as the ground truth when updating the network parameters. In practice, we correct for aberrations such as distortion before training; other effects (e.g. chromatic aberration, field curvature) are negligible at our reconstruction grid size. Possible future work includes training on scenes with larger depth content to yield reconstructions with desirable defocus blurs, as seen in a lensed camera.

8. Conclusion

We presented several unrolled, model-based neural networks for lensless imaging with a varying number of trainable parameters. Our networks jointly incorporate the physics of the imaging model and learned parameters in order to exploit both the known physics and the power of deep learning. We presented an experimental system with a prototype lensless camera that was used to rapidly acquire a dataset of aligned lensless and lensed images for training. Each of our networks produces similar or better image quality than standard algorithms, with the fastest offering a 20$\times$ speed improvement. In addition, our deeper method, Le-ADMM-U, achieves 3$\times$ better image quality than standard algorithms on the LPIPS perceptual similarity scale. Our learned network is fast enough for interactive previewing of the scene and also produces visually appealing images, addressing two of the main limitations of lensless imagers. Our work suggests that such model-based neural networks could greatly improve imaging speed and quality for lensless imaging, at the cost of a training step before camera operation.

Appendix

Network architecture

We outline our U-Net architecture (used for Le-ADMM-U as well as for the U-Net comparison) in Table 3 for completeness. It is based on the architecture specified in [24]. In the table, k is the kernel size, s is the stride, channels in/out gives the number of input and output channels for the layer, and input is the input of the layer, with ‘,’ denoting concatenation. The encoding steps, enc, consist of two convolutional layers, each comprising a 2D convolution followed by batch normalization and a ReLU. The decoding steps, dec, consist of three convolutional layers with the same structure. up$(\cdot )$ denotes bilinear upsampling. conv1 consists of a convolutional layer, batch normalization, and a ReLU, whereas conv2 consists only of a convolutional layer.


Table 3. Network architecture for U-Net used in Le-ADMM-U
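As a rough illustration of how the encoder/decoder sizes in such an architecture fit together (the depth and kernel size below are placeholders, not the exact values of Table 3), one can track the spatial dimensions through stride-2 convolutions on the way down and bilinear 2$\times$ upsampling on the way up:

```python
# Illustrative shape bookkeeping for a U-Net-style encoder/decoder.
def conv_out(size, k, s, p):
    """Spatial size after a 2D convolution (same formula per dimension)."""
    return (size + 2 * p - k) // s + 1

def unet_sizes(size, depth=4, k=3):
    sizes = [size]
    for _ in range(depth):            # enc: stride-2 conv halves H and W
        sizes.append(conv_out(sizes[-1], k, s=2, p=1))
    for _ in range(depth):            # dec: bilinear 2x upsampling restores
        sizes.append(sizes[-1] * 2)   # size; stride-1 convs preserve it
    return sizes

sizes = unet_sizes(256)
assert sizes[0] == sizes[-1] == 256   # decoder recovers the input resolution
```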

Next, we outline the smaller U-Net used for Le-ADMM* in Table 4. The encoding and decoding steps are the same as described in Table 3. Finally, we include a skip connection, adding the input of the network to its output. The network architecture is described as follows:


Table 4. Network architecture for U-Net used in Le-ADMM*.
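The skip connection described above can be sketched minimally as follows (`denoiser_with_skip` and `f` are hypothetical names standing in for the small U-Net, not identifiers from our implementation):

```python
import numpy as np

def denoiser_with_skip(x, f):
    """Residual wrapper: the network f only needs to learn a correction to x."""
    return x + f(x)

x = np.linspace(0.0, 1.0, 8)
# With a zero correction, the skip connection passes the input through unchanged.
out = denoiser_with_skip(x, lambda v: np.zeros_like(v))
assert np.allclose(out, x)
```

This residual form makes the identity mapping trivial to represent, which is helpful when the denoiser should only lightly modify an already-reasonable reconstruction.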

Effect of training size

In Fig. 7 we study the effect of the number of training images on network performance. Our model-based network, Le-ADMM-U, performs much better than the deep method (U-Net) with fewer training images, because it incorporates knowledge of the imaging system into the network.


Fig. 7. Effect of training set size. We vary the number of images in the training set and plot the LPIPS score after 5 epochs. Le-ADMM-U performs better and converges faster than a U-Net alone, while Le-ADMM does not improve as the number of training images increases, since it has so few parameters.


Funding

National Science Foundation (DGE 1752814, DMR 1548924); Gordon and Betty Moore Foundation (GBMF4562).

Acknowledgments

Kristina Monakhova and Kyrollos Yanny acknowledge support from the NSF Graduate Research Fellowship Program. Grace Kuo is a National Defense Science and Engineering Graduate Fellow. The authors thank Ben Mildenhall for helpful discussions.

References

1. M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “FlatCam: Replacing lenses with masks and computation,” in Computer Vision Workshop (ICCVW), 2015 IEEE International Conference on, (IEEE, 2015), pp. 663–666.

2. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics: concept and experimental verification,” Appl. Opt. 40(11), 1806–1813 (2001).

3. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5(1), 1–9 (2018).

4. G. Kuo, N. Antipa, R. Ng, and L. Waller, “3D Fluorescence Microscopy with DiffuserCam,” in Computational Optical Sensing and Imaging, (Optical Society of America, 2018), pp. CM3E–4.

5. K. Yanny, N. Antipa, R. Ng, and L. Waller, “Miniature 3D fluorescence microscope using random microlenses,” in Optics and the Brain, (Optical Society of America, 2019), pp. BT3A–4.

6. F. L. Liu, V. Madhavan, N. Antipa, G. Kuo, S. Kato, and L. Waller, “Single-shot 3D fluorescence microscopy with Fourier DiffuserCam,” in Novel Techniques in Microscopy, (Optical Society of America, 2019), pp. NS2B–3.

7. R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, “Three-Dimensional information acquisition using a compound imaging system,” Opt. Rev. 14(5), 347–350 (2007).

8. N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from Stills: Lensless imaging with rolling shutter,” arXiv preprint arXiv:1905.13221 (2019).

9. J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope,” Sci. Adv. 3(12), e1701548 (2017).

10. P. R. Gill, J. Tringali, A. Schneider, S. Kabir, D. G. Stork, E. Erickson, and M. Kellam, “Thermal Escher Sensors: Pixel-efficient lensless imagers based on tiled optics,” in Computational Optical Sensing and Imaging, (Optical Society of America, 2017), pp. CTu3B–3.

11. K. Tajima, T. Shimano, Y. Nakamura, M. Sao, and T. Hoshizawa, “Lensless light-field imaging with multi-phased fresnel zone aperture,” in 2017 IEEE International Conference on Computational Photography (ICCP), (2017), pp. 76–82.

12. G. Kuo, N. Antipa, R. Ng, and L. Waller, “DiffuserCam: diffuser-based lensless cameras,” in Computational Optical Sensing and Imaging, (Optical Society of America, 2017), pp. CTu3B–2.

13. D. G. Stork and P. R. Gill, “Optical, mathematical, and computational foundations of lensless ultra-miniature diffractive imagers and sensors,” Int. J. on Adv. Syst. Meas. 7, 4 (2014).

14. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2(1), 183–202 (2009).

15. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations Trends Mach. Learning 3(1), 1–122 (2010).

16. S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5(7), 803–813 (2018).

17. Y. Li, Y. Xue, and L. Tian, “Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media,” Optica 5(10), 1181–1190 (2018).

18. T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Deep learning approach for Fourier ptychography microscopy,” Opt. Express 26(20), 26470–26484 (2018).

19. K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” in Proceedings of the 27th International Conference on International Conference on Machine Learning, (Omnipress, 2010), pp.399–406.

20. U. Schmidt and S. Roth, “Shrinkage fields for effective image restoration,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2014), pp. 2774–2781.

21. S. Diamond, V. Sitzmann, S. Boyd, G. Wetzstein, and F. Heide, “Dirty pixels: Optimizing image classification architectures for raw sensor data,” arXiv preprint arXiv:1701.06487 (2017).

22. S. Diamond, V. Sitzmann, F. Heide, and G. Wetzstein, “Unrolled optimization with deep priors,” arXiv preprint arXiv:1705.08041 (2017).

23. J. Sun, H. Li, and Z. Xu, “Deep ADMM-Net for compressive sensing MRI,” Advances in neural information processing systems, (2016), pp. 10–18.

24. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention, (Springer, 2015), pp. 234–241.

25. R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The unreasonable effectiveness of deep features as a perceptual metric,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2018), pp. 586–595.

26. G. Bradski, “The OpenCV Library,” Dr. Dobb’s Journal of Software Tools (2000).

27. M. J. Huiskes and M. S. Lew, “The MIR Flickr Retrieval Evaluation,” in MIR ’08: Proceedings of the 2008 ACM International Conference on Multimedia Information Retrieval, (ACM, New York, NY, USA, 2008).

28. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

29. K. Monakhova, J. Yurtsever, G. Kuo, N. Antipa, K. Yanny, and L. Waller, “Lensless learning repository,” https://github.com/Waller-Lab/LenslessLearning/, (2019). Accessed: 2019-09-05.


Stork, D. G.

D. G. Stork and P. R. Gill, “Optical, mathematical, and computational foundations of lensless ultra-miniature diffractive imagers and sensors,” Int. J. on Adv. Syst. Meas. 7, 4 (2014).

P. R. Gill, J. Tringali, A. Schneider, S. Kabir, D. G. Stork, E. Erickson, and M. Kellam, “Thermal Escher Sensors: Pixel-efficient lensless imagers based on tiled optics,” in Computational Optical Sensing and Imaging, (Optical Society of America, 2017), pp. CTu3B–3.

Sun, J.

J. Sun, H. Li, and Z. Xu, “Deep ADMM-Net for compressive sensing MRI,” Advances in neural information processing systems, (2016), pp. 10–18.

Tajima, K.

K. Tajima, T. Shimano, Y. Nakamura, M. Sao, and T. Hoshizawa, “Lensless light-field imaging with multi-phased fresnel zone aperture,” in 2017 IEEE International Conference on Computational Photography (ICCP), (2017), pp. 76–82.

Tanida, J.

Teboulle, M.

A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2(1), 183–202 (2009).
[Crossref]

Tian, L.

Tringali, J.

P. R. Gill, J. Tringali, A. Schneider, S. Kabir, D. G. Stork, E. Erickson, and M. Kellam, “Thermal Escher Sensors: Pixel-efficient lensless imagers based on tiled optics,” in Computational Optical Sensing and Imaging, (Optical Society of America, 2017), pp. CTu3B–3.

Veeraraghavan, A.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “FlatCam: Replacing lenses with masks and computation,” in Computer Vision Workshop (ICCVW), 2015 IEEE International Conference on, (IEEE, 2015), pp. 663–666.

Vercosa, D. G.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

Waller, L.

N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5(1), 1–9 (2018).
[Crossref]

K. Yanny, N. Antipa, R. Ng, and L. Waller, “Miniature 3D fluorescence microscope using random microlenses,” in Optics and the Brain, (Optical Society of America, 2019), pp. BT3A–4.

G. Kuo, N. Antipa, R. Ng, and L. Waller, “3D Fluorescence Microscopy with DiffuserCam,” in Computational Optical Sensing and Imaging, (Optical Society of America, 2018), pp. CM3E–4.

F. L. Liu, V. Madhavan, N. Antipa, G. Kuo, S. Kato, and L. Waller, “Single-shot 3D fluorescence microscopy with Fourier DiffuserCam,” in Novel Techniques in Microscopy, (Optical Society of America, 2019), pp. NS2B–3.

N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from Stills: Lensless imaging with rolling shutter,” arXiv preprint arXiv:1905.13221 (2019).

G. Kuo, N. Antipa, R. Ng, and L. Waller, “DiffuserCam: diffuser-based lensless cameras,” in Computational Optical Sensing and Imaging, (Optical Society of America, 2017), pp. CTu3B–2.

K. Monakhova, J. Yurtsever, G. Kuo, N. Antipa, K. Yanny, and L. Waller, “Lensless learning repository,” https://github.com/Waller-Lab/LenslessLearning/ , (2019). Accessed: 2019-09-05.

Wang, O.

R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The unreasonable effectiveness of deep features as a perceptual metric,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2018), pp. 586–595.

Wetzstein, G.

S. Diamond, V. Sitzmann, F. Heide, and G. Wetzstein, “Unrolled optimization with deep priors,” arXiv preprint arXiv:1705.08041 (2017).

S. Diamond, V. Sitzmann, S. Boyd, G. Wetzstein, and F. Heide, “Dirty pixels: Optimizing image classification architectures for raw sensor data,” arXiv preprint arXiv:1701.06487 (2017).

Xu, Z.

J. Sun, H. Li, and Z. Xu, “Deep ADMM-Net for compressive sensing MRI,” Advances in neural information processing systems, (2016), pp. 10–18.

Xue, Y.

Yamada, K.

Yanny, K.

K. Yanny, N. Antipa, R. Ng, and L. Waller, “Miniature 3D fluorescence microscope using random microlenses,” in Optics and the Brain, (Optical Society of America, 2019), pp. BT3A–4.

K. Monakhova, J. Yurtsever, G. Kuo, N. Antipa, K. Yanny, and L. Waller, “Lensless learning repository,” https://github.com/Waller-Lab/LenslessLearning/ , (2019). Accessed: 2019-09-05.

Ye, F.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

Yurtsever, J.

K. Monakhova, J. Yurtsever, G. Kuo, N. Antipa, K. Yanny, and L. Waller, “Lensless learning repository,” https://github.com/Waller-Lab/LenslessLearning/ , (2019). Accessed: 2019-09-05.

Zhang, R.

R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The unreasonable effectiveness of deep features as a perceptual metric,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2018), pp. 586–595.




Figures (7)

Fig. 1. Overview of our imaging pipeline. During training, images are displayed on a computer screen and captured simultaneously with both a lensed and a lensless camera to form training pairs, with the lensed images serving as ground-truth labels. The lensless measurements are fed into a model-based network that incorporates knowledge of the physics of the imager. The output of the network is compared with the labels using a loss function, and the network parameters are updated through backpropagation. During operation, the lensless imager takes measurements and the trained model-based network reconstructs the images, providing a large speedup in reconstruction time and an improvement in image quality.
Fig. 2. Networks on a scale from classic to deep. We present several networks specifically designed for lensless imaging (Le-ADMM, Le-ADMM*, and Le-ADMM-U). We compare these to classic approaches, which have no learnable parameters, and to purely deep methods, which include no knowledge of the imaging model, showing the utility of operating in this middle range. $\Theta$ summarizes the parameters learned by each network, as discussed in Section 4.
Fig. 3. Model-based network architecture. The input measurement and the calibration PSF are first fed into N layers of unrolled Le-ADMM. At each layer, the updates corresponding to $\mathbf{S}^{k+1}$ in Eq. (4) are applied. The output can then be fed into an optional denoiser network. The network parameters are updated based on a loss function comparing the output image to the lensed image. Red arrows represent backpropagation through the network parameters.
Fig. 4. Test set results, with the raw DiffuserCam measurement (contrast stretched) and the ground-truth images from the lensed camera for reference. Le-ADMM (71 ms) matches the image quality of converged ADMM (1.5 s) and exceeds that of bounded ADMM (71 ms). Le-ADMM* and Le-ADMM-U have noticeably better visual image quality. The U-Net by itself is unable to reconstruct the appropriate colors and lacks detail.
Fig. 5. Network performance on test set. (a) MSE, LPIPS, and data-fidelity values for all image pairs in our test set. On average, our learned networks (green) are more similar to the ground-truth lensed images (lower MSE and LPIPS) than 5 iterations of ADMM, and have performance comparable to ADMM (100), which takes 20$\times$ longer than Le-ADMM and Le-ADMM-U. However, the data-fidelity term is higher for the learned methods, indicating that these reconstructions are less consistent with the image formation model. (b) Performance after each layer (or, equivalently, each ADMM iteration) in our network, showing that MSE and LPIPS generally decrease through the layers. The U-Net denoiser layer in Le-ADMM-U significantly decreases the LPIPS and MSE values, at the cost of data fidelity.
Fig. 6. Network performance on objects in the wild (toys and a plant) captured with our lensless camera. We show the raw measurement (contrast stretched) on the top row, followed by converged ADMM, ADMM bounded to 5 iterations, our learned networks, and a U-Net for comparison. Our learned networks have image quality similar to or better than converged ADMM, and Le-ADMM-U has the best image quality; for instance, it captures the details in the sideways plant (second column from left) and the eye of the toy duck (right). The U-Net alone has good image quality but misses some colors and details (e.g., the first image is washed out and the nose of the alligator toy is miscolored).
Fig. 7. Effect of training size. We vary the number of images in the training set and plot the LPIPS score after 5 epochs. Le-ADMM-U performs better and converges faster than a U-Net alone. Le-ADMM does not improve as the number of training images increases, since it has so few parameters.
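Two of the metrics in the Fig. 5 caption can be stated precisely: per-image MSE against the lensed ground truth, and data fidelity, the squared residual $\|\mathbf{b} - \mathbf{CHx}\|_2^2$ between the raw measurement and the re-simulated measurement of the reconstruction. A minimal stdlib-Python sketch (illustrative function names, not the paper's evaluation code; images flattened to lists):

```python
# Toy illustration of two Fig. 5 metrics: MSE against a lensed ground-truth
# image, and data fidelity ||b - CHx||_2^2 against the raw measurement.

def mse(estimate, ground_truth):
    """Mean-squared error between a reconstruction and the lensed image."""
    assert len(estimate) == len(ground_truth)
    return sum((e - g) ** 2 for e, g in zip(estimate, ground_truth)) / len(estimate)

def data_fidelity(measurement, estimate, forward):
    """Squared residual between the measurement b and the re-simulated
    measurement forward(x) = CHx of the reconstruction x."""
    simulated = forward(estimate)
    return sum((b - s) ** 2 for b, s in zip(measurement, simulated))

# Hypothetical forward model: identity (a real one would convolve with the
# PSF and crop to the sensor, as in Eq. (1)).
identity = lambda x: x

print(mse([0.0, 0.0], [1.0, 1.0]))                       # 1.0
print(data_fidelity([1.0, 2.0], [1.0, 1.0], identity))   # 1.0
```

Note that the two metrics can disagree, as in Fig. 5: a reconstruction can be close to the ground truth (low MSE) while being less consistent with the image formation model (higher data fidelity).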

Tables (4)

Table 1. Loss functions
Table 2. Network performance on test set
Table 3. Network architecture for U-Net used in Le-ADMM-U
Table 4. Network architecture for U-Net used in Le-ADMM*

Equations (6)


$$b(x,y) = \operatorname{crop}\left[h(x,y) * x(x,y)\right] = \mathbf{C}\mathbf{H}\mathbf{x},$$
$$\hat{\mathbf{x}} = \operatorname*{arg\,min}_{\mathbf{x} \geq 0}\; \frac{1}{2}\left\lVert \mathbf{b} - \mathbf{C}\mathbf{H}\mathbf{x} \right\rVert_2^2 + \tau \left\lVert \Psi\mathbf{x} \right\rVert_1,$$
$$\hat{\mathbf{x}} = \operatorname*{arg\,min}_{\mathbf{w} \geq 0,\, \mathbf{u},\, \mathbf{v}}\; \frac{1}{2}\left\lVert \mathbf{b} - \mathbf{C}\mathbf{v} \right\rVert_2^2 + \tau\left\lVert \mathbf{u} \right\rVert_1, \quad \text{s.t. } \mathbf{v} = \mathbf{H}\mathbf{x},\ \mathbf{u} = \Psi\mathbf{x},\ \mathbf{w} = \mathbf{x}.$$
$$\begin{aligned}
\mathbf{u}^{k+1} &\leftarrow \mathcal{T}_{\tau/\mu_2}\!\left(\Psi\mathbf{x}^k + \boldsymbol{\alpha}_2^k/\mu_2\right) && \text{sparsifying soft-threshold} \\
\mathbf{v}^{k+1} &\leftarrow \left(\mathbf{C}^T\mathbf{C} + \mu_1\mathbf{I}\right)^{-1}\left(\boldsymbol{\alpha}_1^k + \mu_1\mathbf{H}\mathbf{x}^k + \mathbf{C}^T\mathbf{b}\right) && \text{least-squares update} \\
\mathbf{w}^{k+1} &\leftarrow \max\!\left(\boldsymbol{\alpha}_3^k/\mu_3 + \mathbf{x}^k,\, 0\right) && \text{enforce non-negativity} \\
\mathbf{x}^{k+1} &\leftarrow \left(\mu_1\mathbf{H}^T\mathbf{H} + \mu_2\Psi^T\Psi + \mu_3\mathbf{I}\right)^{-1}\mathbf{r}^k && \text{least-squares update} \\
\boldsymbol{\alpha}_1^{k+1} &\leftarrow \boldsymbol{\alpha}_1^k + \mu_1\left(\mathbf{H}\mathbf{x}^{k+1} - \mathbf{v}^{k+1}\right) && \text{dual for } \mathbf{v} \\
\boldsymbol{\alpha}_2^{k+1} &\leftarrow \boldsymbol{\alpha}_2^k + \mu_2\left(\Psi\mathbf{x}^{k+1} - \mathbf{u}^{k+1}\right) && \text{dual for } \mathbf{u} \\
\boldsymbol{\alpha}_3^{k+1} &\leftarrow \boldsymbol{\alpha}_3^k + \mu_3\left(\mathbf{x}^{k+1} - \mathbf{w}^{k+1}\right) && \text{dual for } \mathbf{w}
\end{aligned}$$
where $\mathbf{r}^k = \left(\mu_3\mathbf{w}^{k+1} - \boldsymbol{\alpha}_3^k\right) + \Psi^T\!\left(\mu_2\mathbf{u}^{k+1} - \boldsymbol{\alpha}_2^k\right) + \mathbf{H}^T\!\left(\mu_1\mathbf{v}^{k+1} - \boldsymbol{\alpha}_1^k\right)$.
$$\mathbf{S}^{k+1}\begin{cases}
\mathbf{u}^{k+1} \leftarrow \mathcal{T}_{\tau^k}\!\left(\Psi(\mathbf{x}^k) + \boldsymbol{\alpha}_2^k/\mu_2^k\right) & \text{sparsifying soft-threshold} \\
\mathbf{v}^{k+1} \leftarrow \left(\mathbf{C}^T\mathbf{C} + \mu_1^k\mathbf{I}\right)^{-1}\left(\boldsymbol{\alpha}_1^k + \mu_1^k\mathbf{H}\mathbf{x}^k + \mathbf{C}^T\mathbf{b}\right) & \text{least-squares update} \\
\mathbf{w}^{k+1} \leftarrow \max\!\left(\boldsymbol{\alpha}_3^k/\mu_3^k + \mathbf{x}^k,\, 0\right) & \text{enforce non-negativity} \\
\mathbf{x}^{k+1} \leftarrow \left(\mu_1^k\mathbf{H}^T\mathbf{H} + \mu_2^k\Psi^T\Psi + \mu_3^k\mathbf{I}\right)^{-1}\mathbf{r}^k & \text{least-squares update} \\
\boldsymbol{\alpha}_1^{k+1} \leftarrow \boldsymbol{\alpha}_1^k + \mu_1^k\left(\mathbf{H}\mathbf{x}^{k+1} - \mathbf{v}^{k+1}\right) & \text{dual for } \mathbf{v} \\
\boldsymbol{\alpha}_2^{k+1} \leftarrow \boldsymbol{\alpha}_2^k + \mu_2^k\left(\Psi(\mathbf{x}^{k+1}) - \mathbf{u}^{k+1}\right) & \text{dual for } \mathbf{u} \\
\boldsymbol{\alpha}_3^{k+1} \leftarrow \boldsymbol{\alpha}_3^k + \mu_3^k\left(\mathbf{x}^{k+1} - \mathbf{w}^{k+1}\right) & \text{dual for } \mathbf{w}
\end{cases}$$
where $\mathbf{r}^k = \left(\mu_3^k\mathbf{w}^{k+1} - \boldsymbol{\alpha}_3^k\right) + \Psi^T\!\left(\mu_2^k\mathbf{u}^{k+1} - \boldsymbol{\alpha}_2^k\right) + \mathbf{H}^T\!\left(\mu_1^k\mathbf{v}^{k+1} - \boldsymbol{\alpha}_1^k\right)$.
$$\mathbf{S}^{k+1}\begin{cases}
\mathbf{u}^{k+1} \leftarrow \mathcal{N}(\mathbf{x}^k) & \text{network regularizer} \\
\mathbf{v}^{k+1} \leftarrow \left(\mathbf{C}^T\mathbf{C} + \mu_1^k\mathbf{I}\right)^{-1}\left(\boldsymbol{\alpha}_1^k + \mu_1^k\mathbf{H}\mathbf{x}^k + \mathbf{C}^T\mathbf{b}\right) & \text{least-squares update} \\
\mathbf{w}^{k+1} \leftarrow \max\!\left(\boldsymbol{\alpha}_3^k/\mu_3^k + \mathbf{x}^k,\, 0\right) & \text{enforce non-negativity} \\
\mathbf{x}^{k+1} \leftarrow \left(\mu_1^k\mathbf{H}^T\mathbf{H} + \mu_2^k\mathbf{I} + \mu_3^k\mathbf{I}\right)^{-1}\mathbf{r}^k & \text{least-squares update} \\
\boldsymbol{\alpha}_1^{k+1} \leftarrow \boldsymbol{\alpha}_1^k + \mu_1^k\left(\mathbf{H}\mathbf{x}^{k+1} - \mathbf{v}^{k+1}\right) & \text{dual for } \mathbf{v} \\
\boldsymbol{\alpha}_3^{k+1} \leftarrow \boldsymbol{\alpha}_3^k + \mu_3^k\left(\mathbf{x}^{k+1} - \mathbf{w}^{k+1}\right) & \text{dual for } \mathbf{w}
\end{cases}$$
where $\mathbf{r}^k = \left(\mu_3^k\mathbf{w}^{k+1} - \boldsymbol{\alpha}_3^k\right) + \mu_2^k\mathbf{u}^{k+1} + \mathbf{H}^T\!\left(\mu_1^k\mathbf{v}^{k+1} - \boldsymbol{\alpha}_1^k\right)$.
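To make the building blocks of these updates concrete, here is a toy 1D sketch in stdlib Python of three pieces that appear in Eqs. (1) and (4): the convolve-then-crop forward model $b = \operatorname{crop}[h * x]$, the sparsifying soft-threshold $\mathcal{T}_\tau(\cdot)$, and the non-negativity projection. The function names are illustrative; the paper's actual implementation operates on 2D images with FFT-based convolution and is available in the linked Lensless Learning repository.

```python
def conv_full(h, x):
    """Full discrete 1D convolution of PSF h with scene x (length len(h)+len(x)-1)."""
    y = [0.0] * (len(h) + len(x) - 1)
    for i, hi in enumerate(h):
        for j, xj in enumerate(x):
            y[i + j] += hi * xj
    return y

def crop_center(y, size):
    """Center crop modeling the finite sensor (the operator C in Eq. (1))."""
    start = (len(y) - size) // 2
    return y[start:start + size]

def forward(h, x, sensor_size):
    """Toy 1D analogue of b = crop[h * x] = CHx."""
    return crop_center(conv_full(h, x), sensor_size)

def soft_threshold(v, tau):
    """Sparsifying soft-threshold T_tau, applied elementwise (u-update in Eq. (4))."""
    return [(abs(vi) - tau) * (1 if vi > 0 else -1) if abs(vi) > tau else 0.0
            for vi in v]

def nonneg(v):
    """Non-negativity projection (w-update in Eq. (4))."""
    return [max(vi, 0.0) for vi in v]

print(forward([1.0, 2.0, 1.0], [0.0, 1.0, 0.0, 0.0], 4))  # [1.0, 2.0, 1.0, 0.0]
print(soft_threshold([3.0, -0.5, -2.0], 1.0))             # [2.0, 0.0, -1.0]
print(nonneg([-1.0, 0.5]))                                # [0.0, 0.5]
```

The soft-threshold is the proximal operator of the $\ell_1$ norm, which is why it appears wherever the sparsity term $\tau\lVert\mathbf{u}\rVert_1$ is split off via ADMM; in Le-ADMM the threshold $\tau^k$ becomes a learnable per-layer parameter.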
