Abstract

Hyperspectral imaging is useful for applications ranging from medical diagnostics to agricultural crop monitoring; however, traditional scanning hyperspectral imagers are prohibitively slow and expensive for widespread adoption. Snapshot techniques exist but are often confined to bulky benchtop setups or have low spatio-spectral resolution. In this paper, we propose a novel, compact, and inexpensive computational camera for snapshot hyperspectral imaging. Our system consists of a tiled spectral filter array placed directly on the image sensor and a diffuser placed close to the sensor. Each point in the world maps to a unique pseudorandom pattern on the spectral filter array, which encodes multiplexed spatio-spectral information. By solving a sparsity-constrained inverse problem, we recover the hyperspectral volume with sub-super-pixel resolution. Our hyperspectral imaging framework is flexible and can be designed with contiguous or non-contiguous spectral filters that can be chosen for a given application. We provide theory for system design, demonstrate a prototype device, and present experimental results with high spatio-spectral resolution.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Hyperspectral imaging systems aim to capture a 3D spatio-spectral cube containing spectral information for each spatial location. This enables the detection and classification of different material properties through spectral fingerprints, which cannot be seen with a color camera alone. Hyperspectral imaging has been shown to be useful for a variety of applications, from agricultural crop monitoring to medical diagnostics, microscopy, and food quality analysis [1–10]. Despite this potential utility, commercial hyperspectral cameras range from $25,000 to $100,000 (at the time of publication of this paper). This high price point and large size have limited the widespread use of hyperspectral imagers.

Traditional hyperspectral imagers rely on scanning either the spectral or spatial dimension of the hyperspectral cube with spectral filters or line-scanning [11–13]. These methods can be slow and generally require precise moving parts, increasing the camera complexity. More recently, snapshot techniques have emerged, enabling capture of the full hyperspectral datacube in a single shot. Some snapshot methods trade off spatial resolution for spectral resolution by using a color filter array or splitting up the camera’s field-of-view (FOV). Computational imaging approaches can circumvent this trade-off by spatio-spectrally encoding the incoming light, then solving a compressive sensing inverse problem to recover the spectral cube [14], assuming some structure in the scene. These systems are typically table-top instruments with bulky relay lenses, prisms, or diffractive elements, suitable for laboratory experiments, but not the real world. Recently, several compact snapshot hyperspectral imagers have been demonstrated that encode spatio-spectral information with a single optic, enabling a practical form factor [15–17]. Because a single optic controls both the spectral and spatial resolution, however, these systems are generally constrained to measuring contiguous spectral bins within a given spectral band.

Here, we propose a new encoding scheme that takes advantage of recent advances in patterned thin film spectral filters [18] and lensless imaging, to achieve high-resolution snapshot hyperspectral imaging in a small form factor. Our system consists of a tiled spectral filter array placed directly onto the sensor and a randomizing phase mask (i.e., diffuser) placed a small distance away from the sensor, as in the DiffuserCam architecture [19]. The diffuser spatially multiplexes the incoming light, such that each spatial point in the world maps to many pixels on the camera. The spectral filter array then spectrally encodes the incoming light via a structured erasure function. The multiplexing effect of the diffuser allows recovery of scene information from a subset of sensor pixels, so we are able to recover the full spatio-spectral cube without the loss in resolution that would result from using a non-multiplexing optic, such as a lens (see Fig. 1).

Fig. 1. Overview of the Spectral DiffuserCam imaging pipeline, which reconstructs a hyperspectral datacube from a single-shot 2D measurement. The system consists of a diffuser and spectral filter array bonded to an image sensor. A one-time calibration procedure measures the point spread function (PSF) and filter function. Images are reconstructed using a nonlinear inverse problem solver with a sparsity prior. The result is a 3D hyperspectral cube with 64 channels of spectral information for each of $448 \times 320$ spatial points, generated from a 2D sensor measurement that is $448 \times 320$ pixels.

Our encoding scheme enables hyperspectral recovery in a compact and inexpensive form factor. The spectral filter array can be manufactured directly on the sensor, costing under $5 for both the diffuser and the filter array at scale. A key advantage of our system over previous compact snapshot hyperspectral imagers is that it decouples the spectral and spatial responses, enabling a flexible design in which either contiguous or non-contiguous spectral filters with user-selected bandwidths can be chosen. Given some conditions on scene sparsity and the diffuser randomness, the spectral sampling is determined by the spectral filters, and the spatial resolution is determined by the autocorrelation of the diffuser response. This should find use in task-specific/classification applications [20–23], where one may wish to tailor the spectral sampling to the application by measuring multiple non-contiguous spectral bands, or have higher-resolution spectral sampling for certain bands.

We present theory for our system, simulations to motivate the need for a diffuser, and experimental results from a prototype system. The main contributions of our paper are:

  1. A novel framework for snapshot hyperspectral imaging that combines compressive sensing with spectral filter arrays, enabling compact and inexpensive hyperspectral imaging.
  2. Theory and simulations analyzing the system’s spatio-spectral resolution for objects with varying complexity.
  3. A prototype device demonstrating snapshot hyperspectral recovery on real data from natural scenes.

2. RELATED WORK

A. Snapshot Hyperspectral Imaging

There have been a variety of snapshot hyperspectral imaging techniques proposed and evaluated over the past decades. Most approaches can be categorized into the following groups: spectral filter array methods, coded aperture methods, speckle-based methods, and dispersion-based methods.

Spectral filter array methods use tiled spectral filter arrays on the sensor to recover the spectral channels of interest [24]. These methods can be viewed as an extension of Bayer filters for red, green, blue (RGB) imaging, since each “super-pixel” in the tiled array has a grid of spectral filters. As the number of filters increases, the spectral resolution increases, and the spatial resolution decreases. For instance, with an $8 \times 8$ filter array (64 spectral channels), the spatial resolution is $8 \times$ worse in each direction than that of the camera sensor. Demosaicing methods have been proposed to improve upon this in post-processing; however, they rely on intelligently guessing information that is not recorded by the sensor [25]. Recently, photonic crystal slabs have been demonstrated for compact spectroscopy based on random spectral responses (as opposed to traditional passband responses) and extended to hyperspectral imaging through the tiling of the photonic crystal slab pixels [26,27]. While these methods have high spectral accuracy, they have only been demonstrated in a $10 \times 10$ spatial pixel configuration. Our system uses a spectral filter array, but combines it with a randomizing diffuser in a lensless imaging architecture, allowing us to recover close to the full spatial resolution of the sensor, which is not possible with traditional lens-based methods. Our method uses traditional passband spectral filters, but could be extended to photonic crystal slabs and other spectral filter designs.

Coded aperture methods use a coded aperture, in combination with a dispersive optical element (e.g., a prism or diffractive grating), in order to modulate the light and encode spatial-spectral information [14,28–30]. These systems are able to capture hyperspectral images and videos but tend to be large table-top systems consisting of multiple lenses and optical components. In contrast, our system has a much smaller form factor, requiring only a camera sensor with an attached spectral filter array and a thin diffuser placed close to the sensor.

Speckle-based methods use the wavelength dependence of speckle from a random medium to achieve hyperspectral imaging. This has been demonstrated for compact spectrometers [31,32] and has been extended to hyperspectral imaging [15,16]. These systems can be compact, since they require only a sensor and scattering media as their optic; however, their spectral resolution is limited by the spectral correlation of the speckle. Such systems are challenging to design for a given application, since the spatial and spectral resolutions are highly coupled. In contrast, our system uses spectral filters that can easily be adjusted for a given application and can be selected to have variable bandwidth or non-uniform spectral sampling.

Dispersive methods utilize the dispersion from a prism or diffractive optic to encode spectral information on the sensor. This can be accomplished opportunistically by a prism added to a standard digital single-lens reflex (DSLR) camera [33]. The resulting system has high spatial resolution, equal to that of the camera sensor, but spectral information is encoded only at the edges of objects in the scene, resulting in a highly ill-conditioned problem and lower spectral accuracy. Other methods use a diffuser (as opposed to a prism) as the dispersive element [34]. This can be more compact than prism-based systems and can have improved spatial resolution when combined with an additional RGB camera [35]. To further improve compactness, [17] uses a single diffractive optic as both the lens and the dispersive element, uniquely encoding spectral information in a spectrally rotating point spread function (PSF).

Our system uses a lensless architecture and a spectral filter array, together with sparsity assumptions, to reconstruct 3D hyperspectral information across 64 wavelengths. The design is most similar to [17] and achieves a similar compact size; however, our system achieves better spectral accuracy, and the use of the color filter array and diffuser results in more design flexibility, as our spectral and spatial resolutions are decoupled, enabling custom sensors tailored to specific spectral filter bands that do not need to be contiguous.

B. Lensless Imaging

Lensless, mask-based imaging systems do not have a main lens, but instead use an amplitude or phase mask in place of imaging optics. These systems have been demonstrated for very compact, small form factor 2D imaging [36–39]. They are generally amenable to compressive imaging, due to the multiplexing nature of lensless architectures; each point in the scene maps to many pixels on the sensor, allowing a sparse scene to be completely recovered from a subset of sensor pixels [40]. Alternatively, one can reconstruct higher-dimensional functions, such as 3D [19] or video [41], from a single 2D measurement. In this work, we use diffuser-based lensless imaging to spatially multiplex light onto a repeated spectral filter array, then reconstruct 3D hyperspectral information. Because of the compressed sensing framework, our spatial resolution is better than the array super-pixel size, despite the missing information due to the array.

3. SYSTEM DESIGN OVERVIEW

Our system leverages recent advances in both spectral filter array technology and compressive lensless imaging to decouple the spectral and spatial design. Furthermore, the spectral filter arrays can be deposited directly on the camera sensor. With a diffuser as our multiplexing optic, the system is compact and inexpensive at scale.

To motivate our need for a multiplexing optic instead of an imaging lens, let us consider three candidate architectures: one with a high numerical aperture (NA) lens whose diffraction-limited spot size is matched to the filter pixel size, one with a low-NA lens whose diffraction-limited spot size is matched to the super-pixel size, and finally our design with a diffuser as a multiplexing optic. Figure 2 illustrates these three scenarios with a simplified example of a spectral filter array consisting of $3 \times 3$ spectral filters (nine total) repeated horizontally and vertically. Assume that the monochrome camera sensor has square pixels of lateral size ${N_{\text{pixel}}}$, the spectral filter array has square filters of size ${N_{\text{filter}}}$, and each $3 \times 3$ block of spectral filters creates a super-pixel of size ${N_{\text{super-pixel}}}$, where ${N_{\text{pixel}}} \lt {N_{\text{filter}}} \lt {N_{\text{super-pixel}}}$.

Fig. 2. Motivation for multiplexing: A high-NA lens captures high-resolution spatial information, but misses the yellow point source, since it comes into focus on a spectral filter pixel designed for blue light. A low-NA lens blurs the image of each point source to be the size of the spectral filter’s super-pixel, capturing accurate spectra at the cost of poor spatial resolution. Our DiffuserCam approach multiplexes the light from each point source across many super-pixels, enabling the computational recovery of both point sources and their spectra without sacrificing spatial resolution. Note that a simplified $3 \times 3$ filter array is shown here for clarity.

Fig. 3. Image formation model for a scene with two point sources of different colors, each with narrowband irradiance centered at ${\lambda _y}$ (yellow) and ${\lambda _r}$ (red). The final measurement is the sum of the contributions from each individual spectral filter band in the array. Due to the spatial multiplexing of the lensless architecture, all scene points $\mathbf v(x,y,z)$ project information to multiple spectral filters, which is why we can recover a high-resolution hyperspectral cube from a single image, after solving an inverse problem.

In the high-NA lens case, a point source in the scene will be imaged onto a single filter pixel of the sensor, and thus will only be measured if it is within the passband of that filter; otherwise it will not be recorded [Fig. 2 (left)]. In the low-NA lens case, each point source will be imaged to an area the size of the filter array super-pixel and, thus, recorded by the sensor correctly, but at the price of low spatial resolution (matched to the super-pixel size) [Fig. 2 (middle)]. In contrast, a multiplexing optic can avoid the gaps in the measurement of the high-NA lens and achieve better resolution than the low-NA case.

A diffuser multiplexes the light from each point source such that it hits many filter pixels, covering all of the spectral bands, and the spatial resolution of the final image can be on the order of the camera pixel size, provided that conditions for compressed sensing are met [Fig. 2 (right)]. In practice, the spatial resolution of our system is bounded by the autocorrelation of the PSF, as detailed in Section 7, and the diffuser PSF must span multiple super-pixels to ensure that each point in the world is captured. Since compressive recovery is used to recover a 3D hyperspectral cube from a 2D measurement, the resolution is also a function of the scene complexity (Section 7).

4. IMAGING FORWARD MODEL

Given our design with a diffuser placed in front of a sensor that has a spectral filter array on top of it, in this section, we outline a forward model for the optical system, illustrated in Fig. 3. This model is a critical piece of our iterative inverse algorithm for hyperspectral reconstruction and will also be used to analyze spatial and spectral resolution.

Fig. 4. Experimental calibration of Spectral DiffuserCam. (a) Measured PSF, which is constant across wavelength: the caustic PSF (contrast-stretched and cropped), measured before passing through the spectral filter array, is similar at all wavelengths. (b) Measured spectrally varying filter function, i.e., the spectral response of the filter array alone (no diffuser). Top left: full measurement under illumination by a 458 nm plane wave. The filter array consists of $8 \times 8$ grids of spectral filters repeating across $28 \times 20$ super-pixels. Top right: spectral responses of each of the 64 color channels. Bottom: spectral response of a single super-pixel as the illumination wavelength is varied with a monochromator.

A. Spectral Filter Model

The spectral filter array is placed on top of an imaging sensor, such that the exposure on each pixel is the sum of point-wise multiplications with the discrete filter function,

$${\bf L}[x,y] = \sum\limits_{\lambda = 0}^{K - 1} {{\bf F}_\lambda}[x,y] \cdot {\bf v}[x,y,\lambda],$$
where $\cdot$ denotes point-wise multiplication, ${\bf v}[x,y,\lambda]$ is the spectral irradiance incident on the filter array, and ${{\bf F}_\lambda}[x,y]$ is a 3D function describing the transmittance of light through the spectral filter for $K$ wavelength bands, which we call the filter function. In this model, we absorb the sensor’s spectral response into the definition of ${{\bf F}_\lambda}[x,y]$. Our device’s filter function is determined experimentally (see Section 6.A) and is shown in Fig. 4(b). This model can be generalized to any arbitrary spectral filter design and does not assume alignment between the filter pixels and the sensor pixels. Here, we focus on the case of a repeating grid of spectral filters, where each “super-pixel” consists of a set of narrowband filters. Our device has an $8 \times 8$ grid of filters in each super-pixel; Fig. 3 illustrates a simplified $3 \times 3$ grid, for clarity.
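
To make the filter model concrete, below is a minimal NumPy sketch of Eq. (1). The array shapes and the random stand-ins for ${{\bf F}_\lambda}$ and ${\bf v}$ are illustrative assumptions (wavelength is stored as the leading array axis), not the device's calibrated values.

```python
import numpy as np

K, H, W = 64, 320, 448  # spectral bands and sensor size (matching our crop)

def apply_filter_array(F, v):
    """Eq. (1): point-wise multiply each spectral slice by the filter
    transmittance, then sum over wavelength to get the 2D exposure."""
    return np.sum(F * v, axis=0)  # shape (H, W)

# Hypothetical stand-ins for the calibrated filter function and the scene.
F = np.random.rand(K, H, W)   # filter function F_lambda[x, y]
v = np.random.rand(K, H, W)   # spectral irradiance v[x, y, lambda]
L = apply_filter_array(F, v)  # 2D sensor exposure
```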

B. Diffuser Model

The diffuser (a smooth pseudorandom phase optic) in our system achieves spatial multiplexing; this results in a compact form factor and enables reconstruction with spatial resolution better than the super-pixel size via compressed sensing. The diffuser is placed a small distance away from the sensor, and an aperture is placed on the diffuser to limit higher angles. The sensor plane intensity resulting from the diffuser can be modeled as a convolution of the scene, ${\bf v}[x,y,\lambda]$, with the on-axis PSF, ${\bf h}[x,y]$ [37],

$${\bf w}[x,y,\lambda] = \text{crop}\left({\bf v}[x,y,\lambda]\mathop *\limits^{[x,y]} {\bf h}[x,y]\right),$$
where $\mathop *\limits^{[x,y]}$ represents a discrete 2D linear convolution over the spatial dimensions. The crop function accounts for the finite sensor size. We assume that the PSF does not vary with wavelength and validate this experimentally in Section 6.B. However, this model can be easily extended to include a spectrally varying PSF, ${\bf h}[x,y,\lambda]$, if there is more dispersion across wavelengths.

We assume that objects are placed beyond the hyperfocal distance of the imager; therefore, the PSF has negligible depth-variance, and a 2D convolutional model is valid [37]. If objects are placed within the hyperfocal distance, a 3D model will be needed to account for the depth-variance of the PSF.
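
The diffuser model of Eq. (2) can be sketched in the same style, assuming a single calibrated 2D PSF `h` shared by all wavelengths; `scipy.signal.fftconvolve` performs the discrete 2D linear convolution, and the center crop models the finite sensor.

```python
import numpy as np
from scipy.signal import fftconvolve

def crop_center(w_full, H, W):
    """Keep the central H x W region of each spectral slice (finite sensor)."""
    y0 = (w_full.shape[1] - H) // 2
    x0 = (w_full.shape[2] - W) // 2
    return w_full[:, y0:y0 + H, x0:x0 + W]

def diffuser_forward(v, h):
    """Eq. (2): convolve each spectral slice of the scene with the
    wavelength-independent PSF, then crop to the sensor size."""
    K, H, W = v.shape
    w_full = np.stack([fftconvolve(v[k], h, mode='full') for k in range(K)])
    return crop_center(w_full, H, W)
```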

C. Combined Model

Combining the spectral filter model with the diffuser model, we have the following discrete forward model:

$${\bf b} = \sum\limits_{\lambda = 0}^{K - 1} {{\bf F}_\lambda}[x,y] \cdot \text{crop}({\bf h}[x,y]\mathop *\limits^{[x,y]} {\bf v}[x,y,\lambda])$$
$$= \sum\limits_{\lambda = 0}^{K - 1} {{\bf F}_\lambda}[x,y] \cdot {\bf w}[x,y,\lambda]$$
$$= {\bf Av}.$$
The linear forward model is represented by the combined operations in matrix ${\bf A}$. Figure 3 illustrates the forward model for several point sources, showing the intermediate variable ${\bf w}[x,y,\lambda]$, which is the scene convolved with the PSF, before point-wise multiplication by the filter function. The final image is the sum over all wavelengths.
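
Composing the two sketches above yields the full forward operator ${\bf b} = {\bf Av}$ of Eq. (5). A matched adjoint is also shown, since the gradient step of the solver in Section 5 requires ${\bf A}^T$; both reuse the hypothetical `apply_filter_array`, `diffuser_forward`, and `crop_center` helpers defined earlier.

```python
def forward(v, h, F):
    """Eqs. (3)-(5): b = sum_lambda F_lambda * crop(h ** v)."""
    w = diffuser_forward(v, h)       # scene convolved with the PSF
    return apply_filter_array(F, w)  # spectral encoding and sum over lambda

def adjoint(b, h, F):
    """A^T b: re-weight the measurement by each filter, zero-pad (the
    adjoint of crop), and correlate with the PSF (the adjoint of conv)."""
    K = F.shape[0]
    Fb = F * b[None]  # broadcast the 2D measurement over wavelengths
    py, px = h.shape[0] - 1, h.shape[1] - 1
    Fb_pad = np.pad(Fb, ((0, 0),
                         (py // 2, py - py // 2),
                         (px // 2, px - px // 2)))
    return np.stack([fftconvolve(Fb_pad[k], h[::-1, ::-1], mode='valid')
                     for k in range(K)])
```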

5. HYPERSPECTRAL RECONSTRUCTION

To recover the hyperspectral datacube from the 2D measurement, we must solve an underdetermined inverse problem. Since our system falls within the framework of compressive sensing due to our incoherent, multiplexed measurement, we use ${l_1}$ minimization with a weighted 3D total variation (3DTV) prior on the scene, a nonnegativity constraint, and a low-rank prior on the spectrum. This can be written as

$${\boldsymbol{\hat v}} = \mathop {\text{argmin}}\limits_{{\bf v} \ge 0} \frac{1}{2}\parallel {\bf b} - {\bf Av}\parallel _2^2 + {\tau _1}\parallel {\nabla _{{xy\lambda}}}{\bf v}{\parallel _1} + {\tau _2}\parallel {\bf v}{\parallel _*},$$
where ${\nabla _{{xy\lambda}}} = [{\nabla _x}\;{\nabla _y}\;{\nabla _\lambda}{]^T}$ is the matrix of forward finite differences in the $x$, $y$, and $\lambda$ directions, and $\parallel \cdot {\parallel _*}$ denotes the nuclear norm (the sum of singular values). ${\tau _1}$ and ${\tau _2}$ are the tuning parameters for the 3DTV and low-rank priors, respectively. We solve this problem with the fast iterative shrinkage-thresholding algorithm (FISTA) [42], using weighted anisotropic 3DTV as in [43].
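
As a concrete illustration of the solver, below is a minimal FISTA loop for Eq. (6). For brevity, the proximal step shown is only the nonnegativity projection; our actual solver additionally applies the weighted anisotropic 3DTV and nuclear-norm proximal operators [42,43]. `A` and `At` are assumed to be function handles for the forward model and its adjoint (e.g., the sketches in Section 4), and `step` should be at most the reciprocal of the Lipschitz constant of the data-term gradient.

```python
import numpy as np

def fista(b, A, At, cube_shape, step, n_iter=500):
    """Accelerated proximal gradient descent on 0.5*||b - Av||^2 with a
    simplified prox (nonnegativity only; TV and nuclear terms omitted)."""
    v = np.zeros(cube_shape)
    z, t = v.copy(), 1.0
    for _ in range(n_iter):
        grad = At(A(z) - b)                        # gradient of the data term
        v_next = np.maximum(z - step * grad, 0.0)  # proximal/projection step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
        z = v_next + ((t - 1.0) / t_next) * (v_next - v)  # Nesterov momentum
        v, t = v_next, t_next
    return v
```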

6. IMPLEMENTATION DETAILS

We built a prototype system using a CMOS sensor, a hyperspectral filter array provided by Viavi Solutions (Santa Rosa, CA) [18], and an off-the-shelf diffuser (Luminit 0.5°) placed 1 cm away from the sensor. The sensor has $659 \times 494$ pixels (with a pixel pitch of 9.9 µm), which we crop down to $448 \times 320$ to match the spectral filter array size. The spectral filter array consists of a grid of $28 \times 20$ super-pixels, each with an $8 \times 8$ grid of filter pixels (64 total, spanning the range 386–898 nm). Each filter pixel is 20 µm in size, covering slightly more than four sensor pixels. The alignment between the sensor pixels and the filter pixels is unknown, requiring a calibration procedure (detailed in Section 6.A). The exposure time is adjusted for each image, ranging from 1 to 13 ms, which is short enough for video-rate acquisition. The computational reconstruction typically takes 12–24 min (for 500–1000 iterations) on an RTX 2080-Ti GPU using MATLAB.

A. Filter Function Calibration

To calibrate the filter function [${{\bf F}_\lambda}[x,y]$ in Eq. (3)], including the spectral sensitivity of both the sensor and the spectral filter array, we use a Cornerstone 130 1/3 m motorized monochromator (Model 74004). The monochromator creates a narrowband source with a 5 nm full width at half-maximum (FWHM), and we measure the filter response (without the diffuser) while sweeping the source in 8 nm increments from 386 nm to 898 nm. The result is shown in Fig. 4(b).

B. PSF Calibration

We also need to calibrate the diffuser response by measuring the diffuser PSF pattern without the spectral filter array. Because the diffuser is smooth with large features (relative to the wavelength of light), the PSF remains approximately constant as a function of wavelength, as shown in Fig. 4(a). Hence, we only need to calibrate for a single wavelength by capturing a single point source calibration image [19]. This is not trivial, however, because the spectral filter array is bonded to the sensor and cannot be removed easily. Instead, we take advantage of the fact that our filter array is smaller than our sensor: we shift the point source so that different parts of the PSF scan across the exposed edges of the raw sensor, then stitch the sub-images together. In a system where the filter size is matched to the sensor, this trick will not be possible, but an optimization-based approach could be developed to recover the PSF from measurements.

C. System Non-Idealities

Our reconstruction quality and spectral resolution are limited by two non-idealities in our system. First, our camera development board performs an unknown and uncontrollable nonlinear contrast stretching on all images. This makes the measurement nonlinear and impedes our imaging of dim objects (since the camera performs a larger contrast stretch for dimmer images). Further, our spectral calibration may have errors, since each calibration image cannot be normalized by the intensity of light hitting the sensor. This may cause certain wavelength bands to appear brighter or dimmer than they should in our spectral reconstructions. A better camera board without automatic contrast stretching should fix this problem and provide more quantitative spectral profile reconstructions in the future.

Second, we used a simplified spectral calibration in which we measured the response with uniform spectral sampling, instead of at the true wavelengths of the filters. Due to the mismatch between our calibration scheme (measured every 8 nm with constant bandwidth) and the actual spectral filters (center wavelengths spaced 5–12 nm apart with bandwidths of 6–23 nm), some calibration wavelengths fall between two filters, resulting in an ambiguity. Given this non-ideal calibration, our effective spectral bands are limited to 49 bands, instead of 64. In our results, we show all 64 bands, but note that some will have overlapping spectral responses. In the future, we will calibrate at the design wavelengths of the filters to fix this issue. Further, depositing the spectral filters directly on top of the camera pixels (requiring precise placement during the manufacturing stage) would alleviate the need for this calibration entirely.

7. RESOLUTION ANALYSIS

Here, we derive our theoretical resolution and experimentally validate it with our prototype system. First, we discuss spectral resolution, which is set by the filter bandwidths, and then we compute the expected two-point spatial resolution, based on the PSF autocorrelation. Since our resolution is scene-dependent, we expect the resolution to degrade with scene complexity. To characterize this, we present theory for multi-point resolution based on the condition number analysis introduced in [19]. We compare our system against those with a high-NA and low-NA lens instead of a diffuser. Our results demonstrate two-point spatial resolution of ${\sim}0.19$ super-pixels and multi-point spatial resolution of ${\sim}0.3$ super-pixels for 64 spectral channels spanning 386–898 nm.

A. Spectral Resolution

Spectral resolution is determined by the spectral channels of the filter array. As such, we expect to be able to resolve the 64 spectral channels present in our spectral filter array. The filters have an average spacing of 8 nm across a 386–898 nm range, with bandwidths of 6–23 nm. To validate our spectral resolution, we scan a point source across those wavelengths using a monochromator. Figure 5 shows a sampling of spectral reconstructions overlaid on top of each other, with the shaded blocks indicating the ground truth monochromator spectra. All of our reconstructed peaks match the ground truth to within 5 nm of the true wavelength. The small red peaks around 400 nm are artifacts from the monochromator, which emitted a second peak around 400 nm for the longer wavelengths.

B. Two-Point Spatial Resolution

Spatial resolution of our system, in terms of the two-point resolution, will be bounded by that of a lensless imager with the diffuser only (without the spectral filter array). The expected resolution can be defined as the autocorrelation peak half-width at 70% of the maximum value [37] [Fig. 6(a)]. For our system, this is ${\sim}3$ sensor pixels, or 0.19 super-pixels. To experimentally measure the spatial resolution of our system, we image two point sources at three different wavelengths (618 nm, 522 nm, 466 nm). The reconstructions in Fig. 6 show that we can resolve two point sources that are 0.19 super-pixels apart for each wavelength and orientation, as determined by applying the Rayleigh criterion. This demonstrates that our system achieves sub-super-pixel spatial resolution, consistent with the expected resolution that would be achieved without the spectral filter array.
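
This resolution metric can be computed directly from a calibrated PSF image. Below is a sketch, assuming `h` is the measured 2D PSF array and that only the central autocorrelation lobe exceeds the threshold:

```python
import numpy as np
from scipy.signal import fftconvolve

def autocorr_halfwidth(h, thresh=0.7):
    """Half-width (in sensor pixels) of the PSF autocorrelation peak at
    `thresh` times its maximum, our expected two-point resolution."""
    ac = fftconvolve(h, h[::-1, ::-1], mode='full')  # 2D autocorrelation
    ac /= ac.max()
    cy, _ = np.unravel_index(np.argmax(ac), ac.shape)
    row = ac[cy]                          # horizontal slice through the peak
    above = np.where(row >= thresh)[0]    # indices above the threshold
    return (above[-1] - above[0]) / 2.0   # half-width in pixels
```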

Fig. 5. Spectral resolution analysis. Sample spectra from hyperspectral reconstructions of narrowband point sources, overlaid on top of each other, with shaded lines indicating the ground truth. For each case, the recovered spectral peak matches the true wavelength within 5 nm.

Fig. 6. Spatial resolution analysis. (a) The theoretical resolution of our system, defined as the half-width of the autocorrelation peak at 70% its maximum value, is 0.19 super-pixels. (b) Experimental two-point reconstructions demonstrate 0.19 super-pixel resolution across all wavelengths (slices of the reconstruction shown here), matching the theoretical resolution.

C. Multi-Point Resolution

Because our image reconstruction algorithm contains nonlinear regularization terms, our reconstruction resolution will be object dependent. Hence, two-point resolution measurements are not sufficient for fully characterizing the system resolution and should be considered a best-case scenario. To better predict real-world performance, we perform a local condition number analysis, as introduced in [19], which estimates resolution as a function of object complexity. The local condition number is a proxy for how well the forward model can be inverted, given known support, and is useful for systems such as ours in which the full ${\bf A}$ matrix is never explicitly calculated [44].

The local condition number theory states that given knowledge of the a priori support of the scene, ${\bf v}$, we can form a sub-matrix consisting only of columns of ${\bf A}$ corresponding to the non-zero voxels. The reconstruction problem will be ill-posed if any of the sub-matrices of ${\bf A}$ are ill-conditioned, which can be quantified by the condition number of the sub-matrices. The worst-case condition number will be when sources are near each other; therefore, we compute the condition number for a group of point sources with a separation varying by an integer number of voxels and repeat this for increasing numbers of point sources.
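
A sketch of this computation is shown below, assuming the forward model is available only as a function handle `A_op` (as in our system, where ${\bf A}$ is never formed explicitly): each column of the sub-matrix is the measurement produced by a single active voxel in the hypothesized support.

```python
import numpy as np

def local_condition_number(A_op, cube_shape, support):
    """Condition number of the sub-matrix of A whose columns correspond
    to the voxel indices in `support`, e.g., [(lam, y, x), ...]."""
    cols = []
    for idx in support:
        e = np.zeros(cube_shape)
        e[idx] = 1.0                   # unit impulse at a single voxel
        cols.append(A_op(e).ravel())   # the corresponding column of A
    A_sub = np.stack(cols, axis=1)
    s = np.linalg.svd(A_sub, compute_uv=False)
    return s[0] / s[-1]                # ratio of extreme singular values
```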

In Fig. 7, we calculate the local condition number for two cases: the 2D spatial reconstruction case, considering only a single spectral channel, and the 3D case, considering points with varying spatial and spectral positions. For comparison, we also simulate the condition number for a low-NA and high-NA lens, as introduced in Section 3. The results show that our diffuser design has a consistently lower condition number than either the low- or high-NA lens, remaining below 40 for separation distances greater than ${\sim}0.3$ super-pixels. The low-NA lens needs a separation distance closer to ${\sim}1$ super-pixel, as expected, and the high-NA lens has an erratic condition number due to the missing information in the measurement.

Fig. 7. Condition number analysis for Spectral DiffuserCam, as compared to a low-NA or high-NA lens. (a) Condition numbers for the 2D spatial case (single spectral channel) are calculated by generating different numbers of points on a 2D grid, each with separation distance $d$. (b) Condition numbers for the full spatio-spectral case are calculated on a 3D grid. A condition number below 40 is considered to be good (shown in green). The diffuser has a consistently better performance for small separation distances than either the low-NA or the high-NA lens. The diffuser can resolve objects as low as 0.3 super-pixels apart for more complex scenes, whereas the low-NA lens requires larger separation distances and the high-NA lens suffers errors due to gaps in the measurement.

Fig. 8. Simulated hyperspectral reconstructions comparing our Spectral DiffuserCam result with alternative design options. (a) Resolution target with different sections illuminated by narrowband 634 nm (red), 570 nm (green), 474 nm (blue), and broadband (white) sources (ground truth). (b) Reconstruction of the target by Spectral DiffuserCam, (c) a low-NA lens design, and (d) a high-NA lens design, each showing the raw data, false-colored reconstruction, and $\lambda y$ sum projection. The diffuser achieves higher spatial resolution and better accuracy than the low-NA and the high-NA lens.

From this analysis, we can see that, beyond 0.3 super-pixels separation, the condition number for the diffuser does not get arbitrarily worse for increasing scene complexity. Thus, our expected spatial resolution is approximately 0.3 super-pixels.

D. Simulated Resolution Target Reconstruction

Next, we validate the results of our condition number analysis through simulated reconstructions of a resolution target with different spatial locations illuminated by different sources (red, green, blue, and white light), as shown in Fig. 8. For each simulation, we add Gaussian noise with a variance of $1 \times {10^{- 5}}$ and run the reconstruction for 2000 iterations of FISTA with 3DTV. Our system resolves features that are 0.3 super-pixels apart, whereas the low-NA lens can only resolve features that are roughly 1 super-pixel apart, and the high-NA lens results in gaps, validating our predicted performance.

8. EXPERIMENTAL RESULTS

We start with experimental reconstructions of simple objects with known properties—a broadband USAF resolution target displayed on a computer monitor, and a grid of RGB LEDs (Fig. 9). We resolve points that are ${\sim}0.3$ super-pixels apart, which matches our expected multi-point resolution based on the condition number analysis above. For the RGB LED scene, the ground truth spectral profiles of the LEDs are measured using a spectrometer, and our recovered spectral profile closely matches the ground truth, as shown in Fig. 9(b).

Next, we show reconstructions of more complex objects, either displayed on a computer monitor or illuminated with two halogen lamps (Fig. 10). We plot the ground truth spectral line profiles, as measured by a Thorlabs CCS200 spectrometer, from four points in the scene, showing that we can accurately recover the spectra. A reference RGB scene is shown for each image, demonstrating that the reconstructions spatially match the expected scene.

Fig. 9. (a) Resolution target reconstruction. Experimental reconstruction of a broadband resolution target, showing the $xy$ sum projection (top) and $\lambda y$ sum projection (bottom), demonstrating spatial resolution of 0.3 super-pixels. (b) RGB LED reconstruction. Experimental reconstruction of 10 multi-colored LEDs in a grid with ${\sim}0.4$ super-pixels spacing (four red LEDs on left, four green in middle, two blue at right). We show the $xy$ sum projection (top) and $\lambda y$ sum projection (bottom). The LEDs are clearly resolved spatially and spectrally, and spectral line profiles for each color LED closely match the ground truth spectra from a spectrometer.

9. DISCUSSION

A key advantage of our design over previous work is its flexibility to choose the spectral filters in order to tailor the system to a specific application. For example, one can nonlinearly sample a wide range of wavelengths (which is difficult with many previous snapshot hyperspectral imagers). In the future, we plan to design implementations specific to various task-based applications, which could broaden the adoption of hyperspectral imaging, especially since the price is several orders of magnitude lower than currently available hyperspectral cameras.

Fig. 10. Experimental hyperspectral reconstructions. (a)–(c) Reconstructions of color images displayed on a computer monitor and (d) a Thorlabs plush toy placed in front of the imager and illuminated by two halogen lamps. The raw measurement, false color images, $x \lambda$ sum projections, and spectral line profiles for four spatial points are shown for each scene. The ground truth spectral line profiles, measured using a spectrometer, are plotted in black for reference. Spectral line profiles in (a) and (b) show the average and standard deviation of the spectral profiles across the area of the box or letter in the object, whereas (c) and (d) show a line profile from a single spatial point in the scene.

Currently, we experimentally achieve a spatial resolution of ${\sim}0.3$ super-pixels, or 5 sensor pixels. In future designs, we should be able to achieve the full sensor resolution (along with better quality reconstructions) by optimizing the randomizing optic, instead of using an off-the-shelf diffuser. This could be achieved by end-to-end optical design [45,46].

Our system has two main limitations: light throughput and scene dependence. Because of the narrowband spectral filters, much of the incident light is rejected. This provides good spectral accuracy and discrimination, but at the cost of low light throughput. In addition, since the light is spread by the diffuser over many pixels, the signal-to-noise ratio (SNR) is further decreased. Hence, our imager is not currently suitable for low-light conditions. This light-throughput limitation can be mitigated in the future by the use of photonic crystal slabs instead of narrowband filters, in order to increase light throughput while maintaining spatio-spectral resolution and accuracy [27]. In addition, end-to-end design of both the spectral filters and the phase mask should improve efficiency, since application-specific designs can use only the set of wavelengths necessary for a particular task, without sampling the in-between wavelengths. Reducing the number of spectral bands improves both light throughput (because more sensor area will be dedicated to each spectral band) and spatial resolution (because the super-pixels will be smaller).

Our second limitation is scene dependence: our reconstruction algorithm relies on object sparsity (e.g., sparse gradients). Because of the nonlinear regularization term, performance is difficult to predict, and reconstructions may suffer artifacts if the scene is not sufficiently sparse. Recent advances in machine learning for inverse problems seek to provide better signal representations, enabling the reconstruction of more complicated, denser scenes [47,48]. In addition, machine learning could be useful in speeding up the reconstruction algorithm [49], as well as in utilizing the imager more directly for a higher-level task, such as classification [50].

10. CONCLUSION

Our work presents a new hyperspectral imaging modality that combines a spectral filter array with lensless imaging techniques for an ultra-compact and inexpensive hyperspectral camera. The spectral filter array encodes spectral information onto the sensor, and the diffuser multiplexes the incoming light such that each point in the world maps to many spectral filters. The multiplexed nature of the measurement allows us to use compressive sensing to reconstruct the hyperspectral datacube with high spatio-spectral resolution from a single 2D measurement. We provided an analysis of the expected resolution of our imager and experimentally characterized the two-point and multi-point resolution of the system. Finally, we built a prototype and demonstrated reconstructions of complex spatio-spectral scenes, achieving up to 0.19 super-pixel spatial resolution across 64 spectral bands.

Funding

National Science Foundation (DGE 1752814, DMR 1548924); Gordon and Betty Moore Foundation (GBMF4562).

Acknowledgment

The authors would like to thank Viavi Solutions (Santa Rosa, CA), and particularly Bill Houck, for their technical help and support, as well as Nick Antipa and Grace Kuo for helpful discussions. This work was supported by the Gordon and Betty Moore Foundation Data-Driven Discovery Initiative Grant GBMF4562, and STROBE: A National Science Foundation Science & Technology Center under Grant No. DMR 1548924. Kristina Monakhova and Kyrollos Yanny acknowledge funding from the National Science Foundation Graduate Research Fellowship Program (NSF GRFP) (DGE 1752814). The camera and spectral filter array were provided by Viavi Solutions (Santa Rosa, CA).

Disclosures

The authors declare no conflicts of interest.

REFERENCES

1. S. Delalieux, A. Auwerkerken, W. W. Verstraeten, B. Somers, R. Valcke, S. Lhermitte, J. Keulemans, and P. Coppin, “Hyperspectral reflectance and fluorescence imaging to detect scab induced stress in apple leaves,” Remote Sens. 1, 858–874 (2009). [CrossRef]  

2. R. T. Kester, N. Bedard, L. S. Gao, and T. S. Tkaczyk, “Real-time snapshot hyperspectral imaging endoscope,” J. Biomed. Opt. 16, 056005 (2011). [CrossRef]  

3. G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” J. Biomed. Opt. 19, 010901 (2014). [CrossRef]  

4. D.-W. Sun, Hyperspectral Imaging for Food Quality Analysis and Control (Elsevier, 2010).

5. A. Gowen, C. O’Donnell, P. Cullen, G. Downey, and J. Frias, “Hyperspectral imaging–an emerging process analytical tool for food quality and safety control,” Trends Food Sci. Technol. 18, 590–598 (2007). [CrossRef]  

6. H. Akbari, L. Halig, D. M. Schuster, B. Fei, A. Osunkoya, V. Master, P. Nieh, and G. Chen, “Hyperspectral imaging and quantitative analysis for prostate cancer detection,” J. Biomed. Opt. 17, 0760051 (2012). [CrossRef]  

7. G. Lu, L. V. Halig, D. Wang, X. Qin, Z. G. Chen, and B. Fei, “Spectral-spatial classification for noninvasive cancer detection using hyperspectral imaging,” J. Biomed. Opt. 19, 106004 (2014). [CrossRef]  

8. A. Orth, M. J. Tomaszewski, R. N. Ghosh, and E. Schonbrun, “Gigapixel multispectral microscopy,” Optica 2, 654–662 (2015). [CrossRef]  

9. W. Huang, J. Li, Q. Wang, and L. Chen, “Development of a multispectral imaging system for online detection of bruises on apples,” J. Food Eng. 146, 62–71 (2015). [CrossRef]  

10. C. P. Bacon, Y. Mattley, and R. DeFrece, “Miniature spectroscopic instrumentation: applications to biology and chemistry,” Rev. Sci. Instrum. 75, 1–16 (2004). [CrossRef]  

11. R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, and M. R. Olah, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998). [CrossRef]  

12. N. Gat, “Imaging spectroscopy using tunable filters: a review,” Proc. SPIE 4056, 50–64 (2000). [CrossRef]  

13. C. Zhang, M. Rosenberger, A. Breitbarth, and G. Notni, “A novel 3D multispectral vision system based on filter wheel cameras,” in IEEE International Conference on Imaging Systems and Techniques (IST) (IEEE, 2016), pp. 267–272.

14. A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47, B44–B51 (2008). [CrossRef]  

15. S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4, 1209–1213 (2017). [CrossRef]  

16. R. French, S. Gigan, and O. L. Muskens, “Speckle-based hyperspectral imaging combining multiple scattering and compressive sensing in nanowire mats,” Opt. Lett. 42, 1820–1823 (2017). [CrossRef]  

17. D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, W. Heidrich, and M. H. Kim, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graphics 38, 117 (2019). [CrossRef]  

18. S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018). [CrossRef]  

19. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5, 1–9 (2018). [CrossRef]  

20. V. Saragadam and A. C. Sankaranarayanan, “Programmable spectrometry: per-pixel material classification using learned spectral filters,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2020), pp. 1–10.

21. K. Chao, C. Yang, Y. Chen, M. Kim, and D. Chan, “Hyperspectral-multispectral line-scan imaging system for automated poultry carcass inspection applications for food safety,” Poult. Sci. 86, 2450–2460 (2007). [CrossRef]  

22. R. M. Levenson, D. T. Lynch, H. Kobayashi, J. M. Backer, and M. V. Backer, “Multiplexing with multispectral imaging: from mice to microscopy,” ILAR J. 49, 78–88 (2008). [CrossRef]  

23. A. Hennessy, K. Clarke, and M. Lewis, “Hyperspectral classification of plants: a review of waveband selection generalisability,” Remote Sens. 12, 113 (2020). [CrossRef]  

24. P.-J. Lapray, X. Wang, J.-B. Thomas, and P. Gouton, “Multispectral filter arrays: recent advances and practical implementation,” Sensors 14, 21626–21659 (2014). [CrossRef]  

25. S. Mihoubi, O. Losson, B. Mathon, and L. Macaire, “Multispectral demosaicing using pseudo-panchromatic image,” IEEE Trans. Comput. Imaging 3, 982–995 (2017). [CrossRef]  

26. Z. Wang and Z. Yu, “Spectral analysis based on compressive sensing in nanophotonic structures,” Opt. Express 22, 25608–25614 (2014). [CrossRef]  

27. Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019). [CrossRef]  

28. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15, 14013–14027 (2007). [CrossRef]  

29. X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Trans. Graphics 33, 233 (2014). [CrossRef]  

30. X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016). [CrossRef]  

31. B. Redding, S. F. Liew, R. Sarma, and H. Cao, “Compact spectrometer based on a disordered photonic chip,” Nat. Photonics 7, 746–751 (2013). [CrossRef]  

32. M. Chakrabarti, M. L. Jakobsen, and S. G. Hanson, “Speckle-based spectrometer,” Opt. Lett. 40, 3264–3267 (2015). [CrossRef]  

33. S.-H. Baek, I. Kim, D. Gutierrez, and M. H. Kim, “Compact single-shot hyperspectral imaging using a prism,” ACM Trans. Graphics 36, 217 (2017). [CrossRef]  

34. M. A. Golub, A. Averbuch, M. Nathan, V. A. Zheludev, J. Hauser, S. Gurevitch, R. Malinsky, and A. Kagan, “Compressed sensing snapshot spectral imaging by a regular digital camera with an added optical diffuser,” Appl. Opt. 55, 432–443 (2016). [CrossRef]  

35. J. Hauser, M. A. Golub, A. Averbuch, M. Nathan, V. A. Zheludev, and M. Kagan, “Dual-camera snapshot spectral imaging with a pupil-domain optical diffuser and compressed sensing algorithms,” Appl. Opt. 59, 1058–1070 (2020). [CrossRef]  

36. M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, “FlatCam: thin, lensless cameras using coded aperture and computation,” IEEE Trans. Comput. Imaging 3, 384–397 (2016). [CrossRef]  

37. G. Kuo, N. Antipa, R. Ng, and L. Waller, “Diffusercam: diffuser-based lensless cameras,” in Computational Optical Sensing and Imaging (Optical Society of America, 2017), paper CTu3B–2.

38. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001). [CrossRef]  

39. J. Tanida, R. Shogenji, Y. Kitamura, K. Yamada, M. Miyamoto, and S. Miyatake, “Color imaging with an integrated compound imaging system,” Opt. Express 11, 2109–2117 (2003). [CrossRef]  

40. R. Fergus, A. Torralba, and W. T. Freeman, “Random lens imaging,” MIT CSAIL Technical Report 2006-058 (2006).

41. N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from stills: lensless imaging with rolling shutter,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2019), pp. 1–8.

42. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2, 183–202 (2009). [CrossRef]  

43. U. S. Kamilov, “A parallel proximal algorithm for anisotropic total variation minimization,” IEEE Trans. Image Process. 26, 539–548 (2016). [CrossRef]  

44. E. J. Candès and C. Fernandez-Granda, “Towards a mathematical theory of super-resolution,” Commun. Pure Appl. Math. 67, 906–956 (2014). [CrossRef]  

45. V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein, “End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging,” ACM Trans. Graphics 37, 114 (2018). [CrossRef]  

46. Y. Peng, Q. Sun, X. Dun, G. Wetzstein, W. Heidrich, and F. Heide, “Learned large field-of-view imaging with thin-plate optics,” ACM Trans. Graphics 38, 219 (2019). [CrossRef]  

47. Z. Liu and J. Scarlett, “Information-theoretic lower bounds for compressive sensing with generative models,” IEEE J. Sel. Areas Inf. Theory 1, 292–303 (2020). [CrossRef]  

48. A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” arXiv:1703.03208 (2017).

49. K. Monakhova, J. Yurtsever, G. Kuo, N. Antipa, K. Yanny, and L. Waller, “Learned reconstructions for practical mask-based lensless imaging,” Opt. Express 27, 28075–28090 (2019). [CrossRef]  

50. S. Diamond, V. Sitzmann, S. Boyd, G. Wetzstein, and F. Heide, “Dirty pixels: optimizing image classification architectures for raw sensor data,” arXiv:1701.06487 (2017).

References

  • View by:
  • |
  • |
  • |

  1. S. Delalieux, A. Auwerkerken, W. W. Verstraeten, B. Somers, R. Valcke, S. Lhermitte, J. Keulemans, and P. Coppin, “Hyperspectral reflectance and fluorescence imaging to detect scab induced stress in apple leaves,” Remote Sens. 1, 858–874 (2009).
    [Crossref]
  2. R. T. Kester, N. Bedard, L. S. Gao, and T. S. Tkaczyk, “Real-time snapshot hyperspectral imaging endoscope,” J. Biomed. Opt. 16, 056005 (2011).
    [Crossref]
  3. G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” J. Biomed. Opt. 19, 010901 (2014).
    [Crossref]
  4. D.-W. Sun, Hyperspectral Imaging for Food Quality Analysis and Control (Elsevier, 2010).
  5. A. Gowen, C. O’Donnell, P. Cullen, G. Downey, and J. Frias, “Hyperspectral imaging–an emerging process analytical tool for food quality and safety control,” Trends Food Sci. Technol. 18, 590–598 (2007).
    [Crossref]
  6. H. Akbari, L. Halig, D. M. Schuster, B. Fei, A. Osunkoya, V. Master, P. Nieh, and G. Chen, “Hyperspectral imaging and quantitative analysis for prostate cancer detection,” J. Biomed. Opt. 17, 0760051 (2012).
    [Crossref]
  7. G. Lu, L. V. Halig, D. Wang, X. Qin, Z. G. Chen, and B. Fei, “Spectral-spatial classification for noninvasive cancer detection using hyperspectral imaging,” J. Biomed. Opt. 19, 106004 (2014).
    [Crossref]
  8. A. Orth, M. J. Tomaszewski, R. N. Ghosh, and E. Schonbrun, “Gigapixel multispectral microscopy,” Optica 2, 654–662 (2015).
    [Crossref]
  9. W. Huang, J. Li, Q. Wang, and L. Chen, “Development of a multispectral imaging system for online detection of bruises on apples,” J. Food Eng. 146, 62–71 (2015).
    [Crossref]
  10. C. P. Bacon, Y. Mattley, and R. DeFrece, “Miniature spectroscopic instrumentation: applications to biology and chemistry,” Rev. Sci. Instrum. 75, 1–16 (2004).
    [Crossref]
  11. R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, and M. R. Olah, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
    [Crossref]
  12. N. Gat, “Imaging spectroscopy using tunable filters: a review,” Proc. SPIE 4056, 50–64 (2000).
    [Crossref]
  13. C. Zhang, M. Rosenberger, A. Breitbarth, and G. Notni, “A novel 3D multispectral vision system based on filter wheel cameras,” in IEEE International Conference on Imaging Systems and Techniques (IST) (IEEE, 2016), pp. 267–272.
  14. A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47, B44–B51 (2008).
    [Crossref]
  15. S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4, 1209–1213 (2017).
    [Crossref]
  16. R. French, S. Gigan, and O. L. Muskens, “Speckle-based hyperspectral imaging combining multiple scattering and compressive sensing in nanowire mats,” Opt. Lett. 42, 1820–1823 (2017).
    [Crossref]
  17. D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, W. Heidrich, and M. H. Kim, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graphics 38, 117 (2019).
    [Crossref]
  18. S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018).
    [Crossref]
  19. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5, 1–9 (2018).
    [Crossref]
  20. V. Saragadam and A. C. Sankaranarayanan, “Programmable spectrometry: per-pixel material classification using learned spectral filters,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2020), pp. 1–10.
  21. K. Chao, C. Yang, Y. Chen, M. Kim, and D. Chan, “Hyperspectral-multispectral line-scan imaging system for automated poultry carcass inspection applications for food safety,” Poult. Sci. 86, 2450–2460 (2007).
    [Crossref]
  22. R. M. Levenson, D. T. Lynch, H. Kobayashi, J. M. Backer, and M. V. Backer, “Multiplexing with multispectral imaging: from mice to microscopy,” ILAR J. 49, 78–88 (2008).
    [Crossref]
  23. A. Hennessy, K. Clarke, and M. Lewis, “Hyperspectral classification of plants: a review of waveband selection generalisability,” Remote Sens. 12, 113 (2020).
    [Crossref]
  24. P.-J. Lapray, X. Wang, J.-B. Thomas, and P. Gouton, “Multispectral filter arrays: recent advances and practical implementation,” Sensors 14, 21626–21659 (2014).
    [Crossref]
  25. S. Mihoubi, O. Losson, B. Mathon, and L. Macaire, “Multispectral demosaicing using pseudo-panchromatic image,” IEEE Trans. Comput. Imaging 3, 982–995 (2017).
    [Crossref]
  26. Z. Wang and Z. Yu, “Spectral analysis based on compressive sensing in nanophotonic structures,” Opt. Express 22, 25608–25614 (2014).
    [Crossref]
  27. Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019).
    [Crossref]
  28. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15, 14013–14027 (2007).
    [Crossref]
  29. X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Trans. Graphics 33, 233 (2014).
    [Crossref]
  30. X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
    [Crossref]
  31. B. Redding, S. F. Liew, R. Sarma, and H. Cao, “Compact spectrometer based on a disordered photonic chip,” Nat. Photonics 7, 746–751 (2013).
    [Crossref]
  32. M. Chakrabarti, M. L. Jakobsen, and S. G. Hanson, “Speckle-based spectrometer,” Opt. Lett. 40, 3264–3267 (2015).
    [Crossref]
  33. S.-H. Baek, I. Kim, D. Gutierrez, and M. H. Kim, “Compact single-shot hyperspectral imaging using a prism,” ACM Trans. Graphics 36, 217 (2017).
    [Crossref]
  34. M. A. Golub, A. Averbuch, M. Nathan, V. A. Zheludev, J. Hauser, S. Gurevitch, R. Malinsky, and A. Kagan, “Compressed sensing snapshot spectral imaging by a regular digital camera with an added optical diffuser,” Appl. Opt. 55, 432–443 (2016).
  35. J. Hauser, M. A. Golub, A. Averbuch, M. Nathan, V. A. Zheludev, and M. Kagan, “Dual-camera snapshot spectral imaging with a pupil-domain optical diffuser and compressed sensing algorithms,” Appl. Opt. 59, 1058–1070 (2020).
  36. M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, “FlatCam: thin, lensless cameras using coded aperture and computation,” IEEE Trans. Comput. Imaging 3, 384–397 (2016).
  37. G. Kuo, N. Antipa, R. Ng, and L. Waller, “DiffuserCam: diffuser-based lensless cameras,” in Computational Optical Sensing and Imaging (Optical Society of America, 2017), paper CTu3B–2.
  38. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001).
  39. J. Tanida, R. Shogenji, Y. Kitamura, K. Yamada, M. Miyamoto, and S. Miyatake, “Color imaging with an integrated compound imaging system,” Opt. Express 11, 2109–2117 (2003).
  40. R. Fergus, A. Torralba, and W. T. Freeman, “Random lens imaging,” MIT CSAIL Technical Report 2006-058 (2006).
  41. N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from stills: lensless imaging with rolling shutter,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2019), pp. 1–8.
  42. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2, 183–202 (2009).
  43. U. S. Kamilov, “A parallel proximal algorithm for anisotropic total variation minimization,” IEEE Trans. Image Process. 26, 539–548 (2016).
  44. E. J. Candès and C. Fernandez-Granda, “Towards a mathematical theory of super-resolution,” Commun. Pure Appl. Math. 67, 906–956 (2014).
  45. V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein, “End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging,” ACM Trans. Graphics 37, 114 (2018).
  46. Y. Peng, Q. Sun, X. Dun, G. Wetzstein, W. Heidrich, and F. Heide, “Learned large field-of-view imaging with thin-plate optics,” ACM Trans. Graphics 38, 219 (2019).
  47. Z. Liu and J. Scarlett, “Information-theoretic lower bounds for compressive sensing with generative models,” IEEE J. Sel. Areas Inf. Theory 1, 292–303 (2020).
  48. A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” arXiv:1703.03208 (2017).
  49. K. Monakhova, J. Yurtsever, G. Kuo, N. Antipa, K. Yanny, and L. Waller, “Learned reconstructions for practical mask-based lensless imaging,” Opt. Express 27, 28075–28090 (2019).
  50. S. Diamond, V. Sitzmann, S. Boyd, G. Wetzstein, and F. Heide, “Dirty pixels: optimizing image classification architectures for raw sensor data,” arXiv:1701.06487 (2017).

2020 (3)

A. Hennessy, K. Clarke, and M. Lewis, “Hyperspectral classification of plants: a review of waveband selection generalisability,” Remote Sens. 12, 113 (2020).
[Crossref]

J. Hauser, M. A. Golub, A. Averbuch, M. Nathan, V. A. Zheludev, and M. Kagan, “Dual-camera snapshot spectral imaging with a pupil-domain optical diffuser and compressed sensing algorithms,” Appl. Opt. 59, 1058–1070 (2020).
[Crossref]

Z. Liu and J. Scarlett, “Information-theoretic lower bounds for compressive sensing with generative models,” IEEE J. Sel. Areas Inf. Theory 1, 292–303 (2020).
[Crossref]

2019 (4)

K. Monakhova, J. Yurtsever, G. Kuo, N. Antipa, K. Yanny, and L. Waller, “Learned reconstructions for practical mask-based lensless imaging,” Opt. Express 27, 28075–28090 (2019).
[Crossref]

Y. Peng, Q. Sun, X. Dun, G. Wetzstein, W. Heidrich, and F. Heide, “Learned large field-of-view imaging with thin-plate optics,” ACM Trans. Graphics 38, 219 (2019).
[Crossref]

Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019).
[Crossref]

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, W. Heidrich, and M. H. Kim, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graphics 38, 117 (2019).
[Crossref]

2018 (3)

S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018).
[Crossref]

N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5, 1–9 (2018).
[Crossref]

V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein, “End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging,” ACM Trans. Graphics 37, 114 (2018).
[Crossref]

2017 (4)

S. Mihoubi, O. Losson, B. Mathon, and L. Macaire, “Multispectral demosaicing using pseudo-panchromatic image,” IEEE Trans. Comput. Imaging 3, 982–995 (2017).
[Crossref]

S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4, 1209–1213 (2017).
[Crossref]

R. French, S. Gigan, and O. L. Muskens, “Speckle-based hyperspectral imaging combining multiple scattering and compressive sensing in nanowire mats,” Opt. Lett. 42, 1820–1823 (2017).
[Crossref]

S.-H. Baek, I. Kim, D. Gutierrez, and M. H. Kim, “Compact single-shot hyperspectral imaging using a prism,” ACM Trans. Graphics 36, 217 (2017).
[Crossref]

2016 (4)

M. A. Golub, A. Averbuch, M. Nathan, V. A. Zheludev, J. Hauser, S. Gurevitch, R. Malinsky, and A. Kagan, “Compressed sensing snapshot spectral imaging by a regular digital camera with an added optical diffuser,” Appl. Opt. 55, 432–443 (2016).
[Crossref]

M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, “FlatCam: thin, lensless cameras using coded aperture and computation,” IEEE Trans. Comput. Imaging 3, 384–397 (2016).
[Crossref]

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

U. S. Kamilov, “A parallel proximal algorithm for anisotropic total variation minimization,” IEEE Trans. Image Process. 26, 539–548 (2016).
[Crossref]

2015 (3)

2014 (6)

G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” J. Biomed. Opt. 19, 010901 (2014).
[Crossref]

G. Lu, L. V. Halig, D. Wang, X. Qin, Z. G. Chen, and B. Fei, “Spectral-spatial classification for noninvasive cancer detection using hyperspectral imaging,” J. Biomed. Opt. 19, 106004 (2014).
[Crossref]

X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Trans. Graphics 33, 233 (2014).
[Crossref]

P.-J. Lapray, X. Wang, J.-B. Thomas, and P. Gouton, “Multispectral filter arrays: recent advances and practical implementation,” Sensors 14, 21626–21659 (2014).
[Crossref]

Z. Wang and Z. Yu, “Spectral analysis based on compressive sensing in nanophotonic structures,” Opt. Express 22, 25608–25614 (2014).
[Crossref]

E. J. Candès and C. Fernandez-Granda, “Towards a mathematical theory of super-resolution,” Commun. Pure Appl. Math. 67, 906–956 (2014).
[Crossref]

2013 (1)

B. Redding, S. F. Liew, R. Sarma, and H. Cao, “Compact spectrometer based on a disordered photonic chip,” Nat. Photonics 7, 746–751 (2013).
[Crossref]

2012 (1)

H. Akbari, L. Halig, D. M. Schuster, B. Fei, A. Osunkoya, V. Master, P. Nieh, and G. Chen, “Hyperspectral imaging and quantitative analysis for prostate cancer detection,” J. Biomed. Opt. 17, 0760051 (2012).
[Crossref]

2011 (1)

R. T. Kester, N. Bedard, L. S. Gao, and T. S. Tkaczyk, “Real-time snapshot hyperspectral imaging endoscope,” J. Biomed. Opt. 16, 056005 (2011).
[Crossref]

2009 (2)

S. Delalieux, A. Auwerkerken, W. W. Verstraeten, B. Somers, R. Valcke, S. Lhermitte, J. Keulemans, and P. Coppin, “Hyperspectral reflectance and fluorescence imaging to detect scab induced stress in apple leaves,” Remote Sens. 1, 858–874 (2009).
[Crossref]

A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2, 183–202 (2009).
[Crossref]

2008 (2)

A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47, B44–B51 (2008).
[Crossref]

R. M. Levenson, D. T. Lynch, H. Kobayashi, J. M. Backer, and M. V. Backer, “Multiplexing with multispectral imaging: from mice to microscopy,” ILAR J. 49, 78–88 (2008).
[Crossref]

2007 (3)

M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15, 14013–14027 (2007).
[Crossref]

K. Chao, C. Yang, Y. Chen, M. Kim, and D. Chan, “Hyperspectral-multispectral line-scan imaging system for automated poultry carcass inspection applications for food safety,” Poult. Sci. 86, 2450–2460 (2007).
[Crossref]

A. Gowen, C. O’Donnell, P. Cullen, G. Downey, and J. Frias, “Hyperspectral imaging–an emerging process analytical tool for food quality and safety control,” Trends Food Sci. Technol. 18, 590–598 (2007).
[Crossref]

2004 (1)

C. P. Bacon, Y. Mattley, and R. DeFrece, “Miniature spectroscopic instrumentation: applications to biology and chemistry,” Rev. Sci. Instrum. 75, 1–16 (2004).
[Crossref]

2003 (1)

2001 (1)

2000 (1)

N. Gat, “Imaging spectroscopy using tunable filters: a review,” Proc. SPIE 4056, 50–64 (2000).
[Crossref]

1998 (1)

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, and M. R. Olah, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Akbari, H.

H. Akbari, L. Halig, D. M. Schuster, B. Fei, A. Osunkoya, V. Master, P. Nieh, and G. Chen, “Hyperspectral imaging and quantitative analysis for prostate cancer detection,” J. Biomed. Opt. 17, 0760051 (2012).
[Crossref]

Amoroso, J.

S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018).
[Crossref]

Antipa, N.

K. Monakhova, J. Yurtsever, G. Kuo, N. Antipa, K. Yanny, and L. Waller, “Learned reconstructions for practical mask-based lensless imaging,” Opt. Express 27, 28075–28090 (2019).
[Crossref]

N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5, 1–9 (2018).
[Crossref]

G. Kuo, N. Antipa, R. Ng, and L. Waller, “Diffusercam: diffuser-based lensless cameras,” in Computational Optical Sensing and Imaging (Optical Society of America, 2017), paper CTu3B–2.

N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from stills: lensless imaging with rolling shutter,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2019), pp. 1–8.

Aronsson, M.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, and M. R. Olah, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Asif, M. S.

M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, “FlatCam: thin, lensless cameras using coded aperture and computation,” IEEE Trans. Comput. Imaging 3, 384–397 (2016).
[Crossref]

Auwerkerken, A.

S. Delalieux, A. Auwerkerken, W. W. Verstraeten, B. Somers, R. Valcke, S. Lhermitte, J. Keulemans, and P. Coppin, “Hyperspectral reflectance and fluorescence imaging to detect scab induced stress in apple leaves,” Remote Sens. 1, 858–874 (2009).
[Crossref]

Averbuch, A.

Ayremlou, A.

M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, “FlatCam: thin, lensless cameras using coded aperture and computation,” IEEE Trans. Comput. Imaging 3, 384–397 (2016).
[Crossref]

Backer, J. M.

R. M. Levenson, D. T. Lynch, H. Kobayashi, J. M. Backer, and M. V. Backer, “Multiplexing with multispectral imaging: from mice to microscopy,” ILAR J. 49, 78–88 (2008).
[Crossref]

Backer, M. V.

R. M. Levenson, D. T. Lynch, H. Kobayashi, J. M. Backer, and M. V. Backer, “Multiplexing with multispectral imaging: from mice to microscopy,” ILAR J. 49, 78–88 (2008).
[Crossref]

Bacon, C. P.

C. P. Bacon, Y. Mattley, and R. DeFrece, “Miniature spectroscopic instrumentation: applications to biology and chemistry,” Rev. Sci. Instrum. 75, 1–16 (2004).
[Crossref]

Baek, S.-H.

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, W. Heidrich, and M. H. Kim, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graphics 38, 117 (2019).
[Crossref]

S.-H. Baek, I. Kim, D. Gutierrez, and M. H. Kim, “Compact single-shot hyperspectral imaging using a prism,” ACM Trans. Graphics 36, 217 (2017).
[Crossref]

Baraniuk, R. G.

M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, “FlatCam: thin, lensless cameras using coded aperture and computation,” IEEE Trans. Comput. Imaging 3, 384–397 (2016).
[Crossref]

Beck, A.

A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2, 183–202 (2009).
[Crossref]

Bedard, N.

R. T. Kester, N. Bedard, L. S. Gao, and T. S. Tkaczyk, “Real-time snapshot hyperspectral imaging endoscope,” J. Biomed. Opt. 16, 056005 (2011).
[Crossref]

Bora, A.

A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” arXiv:1703.03208 (2017).

Bostan, E.

N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5, 1–9 (2018).
[Crossref]

N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from stills: lensless imaging with rolling shutter,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2019), pp. 1–8.

Boyd, S.

V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein, “End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging,” ACM Trans. Graphics 37, 114 (2018).
[Crossref]

S. Diamond, V. Sitzmann, S. Boyd, G. Wetzstein, and F. Heide, “Dirty pixels: optimizing image classification architectures for raw sensor data,” arXiv:1701.06487 (2017).

Bradley, R.

S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018).
[Crossref]

Brady, D.

Brady, D. J.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15, 14013–14027 (2007).
[Crossref]

Breitbarth, A.

C. Zhang, M. Rosenberger, A. Breitbarth, and G. Notni, “A novel 3D multispectral vision system based on filter wheel cameras,” in IEEE International Conference on Imaging Systems and Techniques (IST) (IEEE, 2016), pp. 267–272.

Candès, E. J.

E. J. Candès and C. Fernandez-Granda, “Towards a mathematical theory of super-resolution,” Commun. Pure Appl. Math. 67, 906–956 (2014).
[Crossref]

Cao, H.

B. Redding, S. F. Liew, R. Sarma, and H. Cao, “Compact spectrometer based on a disordered photonic chip,” Nat. Photonics 7, 746–751 (2013).
[Crossref]

Cao, X.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

Carin, L.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

Chakrabarti, M.

Chan, D.

K. Chao, C. Yang, Y. Chen, M. Kim, and D. Chan, “Hyperspectral-multispectral line-scan imaging system for automated poultry carcass inspection applications for food safety,” Poult. Sci. 86, 2450–2460 (2007).
[Crossref]

Chao, K.

K. Chao, C. Yang, Y. Chen, M. Kim, and D. Chan, “Hyperspectral-multispectral line-scan imaging system for automated poultry carcass inspection applications for food safety,” Poult. Sci. 86, 2450–2460 (2007).
[Crossref]

Chen, A.

Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019).
[Crossref]

Chen, G.

H. Akbari, L. Halig, D. M. Schuster, B. Fei, A. Osunkoya, V. Master, P. Nieh, and G. Chen, “Hyperspectral imaging and quantitative analysis for prostate cancer detection,” J. Biomed. Opt. 17, 0760051 (2012).
[Crossref]

Chen, L.

W. Huang, J. Li, Q. Wang, and L. Chen, “Development of a multispectral imaging system for online detection of bruises on apples,” J. Food Eng. 146, 62–71 (2015).
[Crossref]

Chen, Y.

K. Chao, C. Yang, Y. Chen, M. Kim, and D. Chan, “Hyperspectral-multispectral line-scan imaging system for automated poultry carcass inspection applications for food safety,” Poult. Sci. 86, 2450–2460 (2007).
[Crossref]

Chen, Z. G.

G. Lu, L. V. Halig, D. Wang, X. Qin, Z. G. Chen, and B. Fei, “Spectral-spatial classification for noninvasive cancer detection using hyperspectral imaging,” J. Biomed. Opt. 19, 106004 (2014).
[Crossref]

Chippendale, B. J.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, and M. R. Olah, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Chovit, C. J.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, and M. R. Olah, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Chrien, T. G.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, and M. R. Olah, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Clarke, K.

A. Hennessy, K. Clarke, and M. Lewis, “Hyperspectral classification of plants: a review of waveband selection generalisability,” Remote Sens. 12, 113 (2020).
[Crossref]

Coppin, P.

S. Delalieux, A. Auwerkerken, W. W. Verstraeten, B. Somers, R. Valcke, S. Lhermitte, J. Keulemans, and P. Coppin, “Hyperspectral reflectance and fluorescence imaging to detect scab induced stress in apple leaves,” Remote Sens. 1, 858–874 (2009).
[Crossref]

Cullen, P.

A. Gowen, C. O’Donnell, P. Cullen, G. Downey, and J. Frias, “Hyperspectral imaging–an emerging process analytical tool for food quality and safety control,” Trends Food Sci. Technol. 18, 590–598 (2007).
[Crossref]

Dai, Q.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Trans. Graphics 33, 233 (2014).
[Crossref]

Dang, C.

DeFrece, R.

C. P. Bacon, Y. Mattley, and R. DeFrece, “Miniature spectroscopic instrumentation: applications to biology and chemistry,” Rev. Sci. Instrum. 75, 1–16 (2004).
[Crossref]

Delalieux, S.

S. Delalieux, A. Auwerkerken, W. W. Verstraeten, B. Somers, R. Valcke, S. Lhermitte, J. Keulemans, and P. Coppin, “Hyperspectral reflectance and fluorescence imaging to detect scab induced stress in apple leaves,” Remote Sens. 1, 858–874 (2009).
[Crossref]

Diamond, S.

V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein, “End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging,” ACM Trans. Graphics 37, 114 (2018).
[Crossref]

S. Diamond, V. Sitzmann, S. Boyd, G. Wetzstein, and F. Heide, “Dirty pixels: optimizing image classification architectures for raw sensor data,” arXiv:1701.06487 (2017).

Dimakis, A. G.

A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” arXiv:1703.03208 (2017).

Downey, G.

A. Gowen, C. O’Donnell, P. Cullen, G. Downey, and J. Frias, “Hyperspectral imaging–an emerging process analytical tool for food quality and safety control,” Trends Food Sci. Technol. 18, 590–598 (2007).
[Crossref]

Dun, X.

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, W. Heidrich, and M. H. Kim, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graphics 38, 117 (2019).
[Crossref]

Y. Peng, Q. Sun, X. Dun, G. Wetzstein, W. Heidrich, and F. Heide, “Learned large field-of-view imaging with thin-plate optics,” ACM Trans. Graphics 38, 219 (2019).
[Crossref]

V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein, “End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging,” ACM Trans. Graphics 37, 114 (2018).
[Crossref]

Eastwood, M. L.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, and M. R. Olah, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Faust, J. A.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, and M. R. Olah, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Fei, B.

G. Lu, L. V. Halig, D. Wang, X. Qin, Z. G. Chen, and B. Fei, “Spectral-spatial classification for noninvasive cancer detection using hyperspectral imaging,” J. Biomed. Opt. 19, 106004 (2014).
[Crossref]

G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” J. Biomed. Opt. 19, 010901 (2014).
[Crossref]

H. Akbari, L. Halig, D. M. Schuster, B. Fei, A. Osunkoya, V. Master, P. Nieh, and G. Chen, “Hyperspectral imaging and quantitative analysis for prostate cancer detection,” J. Biomed. Opt. 17, 0760051 (2012).
[Crossref]

Fergus, R.

R. Fergus, A. Torralba, and W. T. Freeman, “Random lens imaging,” MIT CSAIL Technical Report 2006-058 (2006).

Fernandez-Granda, C.

E. J. Candès and C. Fernandez-Granda, “Towards a mathematical theory of super-resolution,” Commun. Pure Appl. Math. 67, 906–956 (2014).
[Crossref]

Freeman, W. T.

R. Fergus, A. Torralba, and W. T. Freeman, “Random lens imaging,” MIT CSAIL Technical Report 2006-058 (2006).

French, R.

Frias, J.

A. Gowen, C. O’Donnell, P. Cullen, G. Downey, and J. Frias, “Hyperspectral imaging–an emerging process analytical tool for food quality and safety control,” Trends Food Sci. Technol. 18, 590–598 (2007).
[Crossref]

Fu, Q.

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, W. Heidrich, and M. H. Kim, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graphics 38, 117 (2019).
[Crossref]

Gao, L. S.

R. T. Kester, N. Bedard, L. S. Gao, and T. S. Tkaczyk, “Real-time snapshot hyperspectral imaging endoscope,” J. Biomed. Opt. 16, 056005 (2011).
[Crossref]

Gat, N.

N. Gat, “Imaging spectroscopy using tunable filters: a review,” Proc. SPIE 4056, 50–64 (2000).
[Crossref]

Gehm, M. E.

Ghosh, R. N.

Gigan, S.

Golub, M. A.

Gouton, P.

P.-J. Lapray, X. Wang, J.-B. Thomas, and P. Gouton, “Multispectral filter arrays: recent advances and practical implementation,” Sensors 14, 21626–21659 (2014).
[Crossref]

Gowen, A.

A. Gowen, C. O’Donnell, P. Cullen, G. Downey, and J. Frias, “Hyperspectral imaging–an emerging process analytical tool for food quality and safety control,” Trends Food Sci. Technol. 18, 590–598 (2007).
[Crossref]

Green, R. O.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, and M. R. Olah, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Gurevitch, S.

Gutierrez, D.

S.-H. Baek, I. Kim, D. Gutierrez, and M. H. Kim, “Compact single-shot hyperspectral imaging using a prism,” ACM Trans. Graphics 36, 217 (2017).
[Crossref]

Halig, L.

H. Akbari, L. Halig, D. M. Schuster, B. Fei, A. Osunkoya, V. Master, P. Nieh, and G. Chen, “Hyperspectral imaging and quantitative analysis for prostate cancer detection,” J. Biomed. Opt. 17, 0760051 (2012).
[Crossref]

Halig, L. V.

G. Lu, L. V. Halig, D. Wang, X. Qin, Z. G. Chen, and B. Fei, “Spectral-spatial classification for noninvasive cancer detection using hyperspectral imaging,” J. Biomed. Opt. 19, 106004 (2014).
[Crossref]

Hanson, S. G.

Hauser, J.

Heckel, R.

Heide, F.

Y. Peng, Q. Sun, X. Dun, G. Wetzstein, W. Heidrich, and F. Heide, “Learned large field-of-view imaging with thin-plate optics,” ACM Trans. Graphics 38, 219 (2019).
[Crossref]

V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein, “End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging,” ACM Trans. Graphics 37, 114 (2018).
[Crossref]

S. Diamond, V. Sitzmann, S. Boyd, G. Wetzstein, and F. Heide, “Dirty pixels: optimizing image classification architectures for raw sensor data,” arXiv:1701.06487 (2017).

Heidrich, W.

Y. Peng, Q. Sun, X. Dun, G. Wetzstein, W. Heidrich, and F. Heide, “Learned large field-of-view imaging with thin-plate optics,” ACM Trans. Graphics 38, 219 (2019).
[Crossref]

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, W. Heidrich, and M. H. Kim, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graphics 38, 117 (2019).
[Crossref]

V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein, “End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging,” ACM Trans. Graphics 37, 114 (2018).
[Crossref]

Hennessy, A.

A. Hennessy, K. Clarke, and M. Lewis, “Hyperspectral classification of plants: a review of waveband selection generalisability,” Remote Sens. 12, 113 (2020).
[Crossref]

Houck, A.

S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018).
[Crossref]

Houck, W.

S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018).
[Crossref]

Hruska, C.

S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018).
[Crossref]

Hsiung, C.

S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018).
[Crossref]

Huang, W.

W. Huang, J. Li, Q. Wang, and L. Chen, “Development of a multispectral imaging system for online detection of bruises on apples,” J. Food Eng. 146, 62–71 (2015).
[Crossref]

Ichioka, Y.

Ishida, K.

Jakobsen, M. L.

Jalal, A.

A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” arXiv:1703.03208 (2017).

James, A.

Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019).
[Crossref]

Jeon, D. S.

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, W. Heidrich, and M. H. Kim, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graphics 38, 117 (2019).
[Crossref]

Joe, G.

Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019).
[Crossref]

John, R.

Kagan, A.

Kagan, M.

Kamilov, U. S.

U. S. Kamilov, “A parallel proximal algorithm for anisotropic total variation minimization,” IEEE Trans. Image Process. 26, 539–548 (2016).
[Crossref]

Kats, M. A.

Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019).
[Crossref]

Kester, R. T.

R. T. Kester, N. Bedard, L. S. Gao, and T. S. Tkaczyk, “Real-time snapshot hyperspectral imaging endoscope,” J. Biomed. Opt. 16, 056005 (2011).
[Crossref]

Keulemans, J.

S. Delalieux, A. Auwerkerken, W. W. Verstraeten, B. Somers, R. Valcke, S. Lhermitte, J. Keulemans, and P. Coppin, “Hyperspectral reflectance and fluorescence imaging to detect scab induced stress in apple leaves,” Remote Sens. 1, 858–874 (2009).
[Crossref]

Kim, I.

S.-H. Baek, I. Kim, D. Gutierrez, and M. H. Kim, “Compact single-shot hyperspectral imaging using a prism,” ACM Trans. Graphics 36, 217 (2017).
[Crossref]

Kim, M.

K. Chao, C. Yang, Y. Chen, M. Kim, and D. Chan, “Hyperspectral-multispectral line-scan imaging system for automated poultry carcass inspection applications for food safety,” Poult. Sci. 86, 2450–2460 (2007).
[Crossref]

Kim, M. H.

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, W. Heidrich, and M. H. Kim, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graphics 38, 117 (2019).
[Crossref]

S.-H. Baek, I. Kim, D. Gutierrez, and M. H. Kim, “Compact single-shot hyperspectral imaging using a prism,” ACM Trans. Graphics 36, 217 (2017).
[Crossref]

Kitamura, Y.

Klimek, M.

S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018).
[Crossref]

Kobayashi, H.

R. M. Levenson, D. T. Lynch, H. Kobayashi, J. M. Backer, and M. V. Backer, “Multiplexing with multispectral imaging: from mice to microscopy,” ILAR J. 49, 78–88 (2008).
[Crossref]

Kondou, N.

Kumagai, T.

Kuo, G.

Lapray, P.-J.

P.-J. Lapray, X. Wang, J.-B. Thomas, and P. Gouton, “Multispectral filter arrays: recent advances and practical implementation,” Sensors 14, 21626–21659 (2014).
[Crossref]

Levenson, R. M.

R. M. Levenson, D. T. Lynch, H. Kobayashi, J. M. Backer, and M. V. Backer, “Multiplexing with multispectral imaging: from mice to microscopy,” ILAR J. 49, 78–88 (2008).
[Crossref]

Lewis, M.

A. Hennessy, K. Clarke, and M. Lewis, “Hyperspectral classification of plants: a review of waveband selection generalisability,” Remote Sens. 12, 113 (2020).
[Crossref]

Lhermitte, S.

S. Delalieux, A. Auwerkerken, W. W. Verstraeten, B. Somers, R. Valcke, S. Lhermitte, J. Keulemans, and P. Coppin, “Hyperspectral reflectance and fluorescence imaging to detect scab induced stress in apple leaves,” Remote Sens. 1, 858–874 (2009).
[Crossref]

Li, J.

W. Huang, J. Li, Q. Wang, and L. Chen, “Development of a multispectral imaging system for online detection of bruises on apples,” J. Food Eng. 146, 62–71 (2015).
[Crossref]

Liew, S. F.

B. Redding, S. F. Liew, R. Sarma, and H. Cao, “Compact spectrometer based on a disordered photonic chip,” Nat. Photonics 7, 746–751 (2013).
[Crossref]

Lin, S.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

Lin, X.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Trans. Graphics 33, 233 (2014).
[Crossref]

Liu, Y.

X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Trans. Graphics 33, 233 (2014).
[Crossref]

Liu, Z.

Z. Liu and J. Scarlett, “Information-theoretic lower bounds for compressive sensing with generative models,” IEEE J. Sel. Areas Inf. Theory 1, 292–303 (2020).
[Crossref]

Losson, O.

S. Mihoubi, O. Losson, B. Mathon, and L. Macaire, “Multispectral demosaicing using pseudo-panchromatic image,” IEEE Trans. Comput. Imaging 3, 982–995 (2017).
[Crossref]

Lu, G.

G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” J. Biomed. Opt. 19, 010901 (2014).
[Crossref]

G. Lu, L. V. Halig, D. Wang, X. Qin, Z. G. Chen, and B. Fei, “Spectral-spatial classification for noninvasive cancer detection using hyperspectral imaging,” J. Biomed. Opt. 19, 106004 (2014).
[Crossref]

Luk, T. S.

Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019).
[Crossref]

Lynch, D. T.

R. M. Levenson, D. T. Lynch, H. Kobayashi, J. M. Backer, and M. V. Backer, “Multiplexing with multispectral imaging: from mice to microscopy,” ILAR J. 49, 78–88 (2008).
[Crossref]

Macaire, L.

S. Mihoubi, O. Losson, B. Mathon, and L. Macaire, “Multispectral demosaicing using pseudo-panchromatic image,” IEEE Trans. Comput. Imaging 3, 982–995 (2017).
[Crossref]

Malinsky, R.

Martin, D.

S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018).
[Crossref]

Master, V.

H. Akbari, L. Halig, D. M. Schuster, B. Fei, A. Osunkoya, V. Master, P. Nieh, and G. Chen, “Hyperspectral imaging and quantitative analysis for prostate cancer detection,” J. Biomed. Opt. 17, 0760051 (2012).
[Crossref]

Mathon, B.

S. Mihoubi, O. Losson, B. Mathon, and L. Macaire, “Multispectral demosaicing using pseudo-panchromatic image,” IEEE Trans. Comput. Imaging 3, 982–995 (2017).
[Crossref]

Mattley, Y.

C. P. Bacon, Y. Mattley, and R. DeFrece, “Miniature spectroscopic instrumentation: applications to biology and chemistry,” Rev. Sci. Instrum. 75, 1–16 (2004).
[Crossref]

Meysing, D.

S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018).
[Crossref]

Mihoubi, S.

S. Mihoubi, O. Losson, B. Mathon, and L. Macaire, “Multispectral demosaicing using pseudo-panchromatic image,” IEEE Trans. Comput. Imaging 3, 982–995 (2017).
[Crossref]

Mildenhall, B.

Miyamoto, M.

Miyatake, S.

Miyazaki, D.

Monakhova, K.

Morimoto, T.

Muskens, O. L.

Nathan, M.

Ng, R.

N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5, 1–9 (2018).
[Crossref]

G. Kuo, N. Antipa, R. Ng, and L. Waller, “Diffusercam: diffuser-based lensless cameras,” in Computational Optical Sensing and Imaging (Optical Society of America, 2017), paper CTu3B–2.

N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from stills: lensless imaging with rolling shutter,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2019), pp. 1–8.

Nieh, P.

H. Akbari, L. Halig, D. M. Schuster, B. Fei, A. Osunkoya, V. Master, P. Nieh, and G. Chen, “Hyperspectral imaging and quantitative analysis for prostate cancer detection,” J. Biomed. Opt. 17, 0760051 (2012).
[Crossref]

Nogan, J.

Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019).
[Crossref]

Notni, G.

C. Zhang, M. Rosenberger, A. Breitbarth, and G. Notni, “A novel 3D multispectral vision system based on filter wheel cameras,” in IEEE International Conference on Imaging Systems and Techniques (IST) (IEEE, 2016), pp. 267–272.

O’Donnell, C.

A. Gowen, C. O’Donnell, P. Cullen, G. Downey, and J. Frias, “Hyperspectral imaging–an emerging process analytical tool for food quality and safety control,” Trends Food Sci. Technol. 18, 590–598 (2007).
[Crossref]

Oare, P.

N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from stills: lensless imaging with rolling shutter,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2019), pp. 1–8.

Olah, M. R.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, and M. R. Olah, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Orth, A.

Osunkoya, A.

H. Akbari, L. Halig, D. M. Schuster, B. Fei, A. Osunkoya, V. Master, P. Nieh, and G. Chen, “Hyperspectral imaging and quantitative analysis for prostate cancer detection,” J. Biomed. Opt. 17, 0760051 (2012).
[Crossref]

Pavri, B. E.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, and M. R. Olah, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Peng, Y.

Y. Peng, Q. Sun, X. Dun, G. Wetzstein, W. Heidrich, and F. Heide, “Learned large field-of-view imaging with thin-plate optics,” ACM Trans. Graphics 38, 219 (2019).
[Crossref]

V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein, “End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging,” ACM Trans. Graphics 37, 114 (2018).
[Crossref]

Price, E.

A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” arXiv:1703.03208 (2017).

Qin, X.

G. Lu, L. V. Halig, D. Wang, X. Qin, Z. G. Chen, and B. Fei, “Spectral-spatial classification for noninvasive cancer detection using hyperspectral imaging,” J. Biomed. Opt. 19, 106004 (2014).
[Crossref]

Redding, B.

B. Redding, S. F. Liew, R. Sarma, and H. Cao, “Compact spectrometer based on a disordered photonic chip,” Nat. Photonics 7, 746–751 (2013).
[Crossref]

Rosenberger, M.

C. Zhang, M. Rosenberger, A. Breitbarth, and G. Notni, “A novel 3D multispectral vision system based on filter wheel cameras,” in IEEE International Conference on Imaging Systems and Techniques (IST) (IEEE, 2016), pp. 267–272.

Ross, W.

Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019).
[Crossref]

Sahoo, S. K.

Sankaranarayanan, A.

M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, “FlatCam: thin, lensless cameras using coded aperture and computation,” IEEE Trans. Comput. Imaging 3, 384–397 (2016).
[Crossref]

Sankaranarayanan, A. C.

V. Saragadam and A. C. Sankaranarayanan, “Programmable spectrometry: per-pixel material classification using learned spectral filters,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2020), pp. 1–10.

Saragadam, V.

V. Saragadam and A. C. Sankaranarayanan, “Programmable spectrometry: per-pixel material classification using learned spectral filters,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2020), pp. 1–10.

Sarma, R.

B. Redding, S. F. Liew, R. Sarma, and H. Cao, “Compact spectrometer based on a disordered photonic chip,” Nat. Photonics 7, 746–751 (2013).
[Crossref]

Sarture, C. M.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, and M. R. Olah, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Saxe, S.

S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018).
[Crossref]

Scarlett, J.

Z. Liu and J. Scarlett, “Information-theoretic lower bounds for compressive sensing with generative models,” IEEE J. Sel. Areas Inf. Theory 1, 292–303 (2020).
[Crossref]

Schonbrun, E.

Schulz, T. J.

Schuster, D. M.

H. Akbari, L. Halig, D. M. Schuster, B. Fei, A. Osunkoya, V. Master, P. Nieh, and G. Chen, “Hyperspectral imaging and quantitative analysis for prostate cancer detection,” J. Biomed. Opt. 17, 0760051 (2012).
[Crossref]

Shahsafi, A.

Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019).
[Crossref]

Shogenji, R.

Sitzmann, V.

V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein, “End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging,” ACM Trans. Graphics 37, 114 (2018).
[Crossref]

S. Diamond, V. Sitzmann, S. Boyd, G. Wetzstein, and F. Heide, “Dirty pixels: optimizing image classification architectures for raw sensor data,” arXiv:1701.06487 (2017).

Smith, V.

S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018).
[Crossref]

Solis, M.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, and M. R. Olah, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Somers, B.

S. Delalieux, A. Auwerkerken, W. W. Verstraeten, B. Somers, R. Valcke, S. Lhermitte, J. Keulemans, and P. Coppin, “Hyperspectral reflectance and fluorescence imaging to detect scab induced stress in apple leaves,” Remote Sens. 1, 858–874 (2009).
[Crossref]

Sun, D.-W.

D.-W. Sun, Hyperspectral Imaging for Food Quality Analysis and Control (Elsevier, 2010).

Sun, L.

S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018).
[Crossref]

Sun, Q.

Y. Peng, Q. Sun, X. Dun, G. Wetzstein, W. Heidrich, and F. Heide, “Learned large field-of-view imaging with thin-plate optics,” ACM Trans. Graphics 38, 219 (2019).
[Crossref]

Tang, D.

Tanida, J.

Teboulle, M.

A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2, 183–202 (2009).
[Crossref]

Thomas, J.-B.

P.-J. Lapray, X. Wang, J.-B. Thomas, and P. Gouton, “Multispectral filter arrays: recent advances and practical implementation,” Sensors 14, 21626–21659 (2014).
[Crossref]

Tkaczyk, T. S.

R. T. Kester, N. Bedard, L. S. Gao, and T. S. Tkaczyk, “Real-time snapshot hyperspectral imaging endoscope,” J. Biomed. Opt. 16, 056005 (2011).
[Crossref]

Tomaszewski, M. J.

Torralba, A.

R. Fergus, A. Torralba, and W. T. Freeman, “Random lens imaging,” MIT CSAIL Technical Report 2006-058 (2006).

Valcke, R.

S. Delalieux, A. Auwerkerken, W. W. Verstraeten, B. Somers, R. Valcke, S. Lhermitte, J. Keulemans, and P. Coppin, “Hyperspectral reflectance and fluorescence imaging to detect scab induced stress in apple leaves,” Remote Sens. 1, 858–874 (2009).
[Crossref]

Veeraraghavan, A.

M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, “FlatCam: thin, lensless cameras using coded aperture and computation,” IEEE Trans. Comput. Imaging 3, 384–397 (2016).
[Crossref]

Verstraeten, W. W.

S. Delalieux, A. Auwerkerken, W. W. Verstraeten, B. Somers, R. Valcke, S. Lhermitte, J. Keulemans, and P. Coppin, “Hyperspectral reflectance and fluorescence imaging to detect scab induced stress in apple leaves,” Remote Sens. 1, 858–874 (2009).
[Crossref]

Von Gunten, M.

S. Saxe, L. Sun, V. Smith, D. Meysing, C. Hsiung, A. Houck, M. Von Gunten, C. Hruska, D. Martin, R. Bradley, J. Amoroso, M. Klimek, and W. Houck, “Advances in miniaturized spectral sensors,” Proc. SPIE 10657, 106570B (2018).
[Crossref]

Wagadarikar, A.

Waller, L.

K. Monakhova, J. Yurtsever, G. Kuo, N. Antipa, K. Yanny, and L. Waller, “Learned reconstructions for practical mask-based lensless imaging,” Opt. Express 27, 28075–28090 (2019).
[Crossref]

N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5, 1–9 (2018).
[Crossref]

G. Kuo, N. Antipa, R. Ng, and L. Waller, “Diffusercam: diffuser-based lensless cameras,” in Computational Optical Sensing and Imaging (Optical Society of America, 2017), paper CTu3B–2.

N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from stills: lensless imaging with rolling shutter,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2019), pp. 1–8.

Wang, D.

G. Lu, L. V. Halig, D. Wang, X. Qin, Z. G. Chen, and B. Fei, “Spectral-spatial classification for noninvasive cancer detection using hyperspectral imaging,” J. Biomed. Opt. 19, 106004 (2014).
[Crossref]

Wang, K. X.

Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019).
[Crossref]

Wang, Q.

W. Huang, J. Li, Q. Wang, and L. Chen, “Development of a multispectral imaging system for online detection of bruises on apples,” J. Food Eng. 146, 62–71 (2015).
[Crossref]

Wang, X.

P.-J. Lapray, X. Wang, J.-B. Thomas, and P. Gouton, “Multispectral filter arrays: recent advances and practical implementation,” Sensors 14, 21626–21659 (2014).
[Crossref]

Wang, Z.

Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019).
[Crossref]

Z. Wang and Z. Yu, “Spectral analysis based on compressive sensing in nanophotonic structures,” Opt. Express 22, 25608–25614 (2014).
[Crossref]

Wetzstein, G.

Y. Peng, Q. Sun, X. Dun, G. Wetzstein, W. Heidrich, and F. Heide, “Learned large field-of-view imaging with thin-plate optics,” ACM Trans. Graphics 38, 219 (2019).
[Crossref]

V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein, “End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging,” ACM Trans. Graphics 37, 114 (2018).
[Crossref]

S. Diamond, V. Sitzmann, S. Boyd, G. Wetzstein, and F. Heide, “Dirty pixels: optimizing image classification architectures for raw sensor data,” arXiv:1701.06487 (2017).

Willett, R.

Willett, R. M.

Wu, J.

X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Trans. Graphics 33, 233 (2014).
[Crossref]

Yamada, K.

Yang, C.

K. Chao, C. Yang, Y. Chen, M. Kim, and D. Chan, “Hyperspectral-multispectral line-scan imaging system for automated poultry carcass inspection applications for food safety,” Poult. Sci. 86, 2450–2460 (2007).
[Crossref]

Yanny, K.

Yi, S.

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, W. Heidrich, and M. H. Kim, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graphics 38, 117 (2019).
[Crossref]

Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019).
[Crossref]

Yu, Z.

Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019).
[Crossref]

Z. Wang and Z. Yu, “Spectral analysis based on compressive sensing in nanophotonic structures,” Opt. Express 22, 25608–25614 (2014).
[Crossref]

Yuan, X.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

Yue, T.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

Yurtsever, J.

Zhang, C.

C. Zhang, M. Rosenberger, A. Breitbarth, and G. Notni, “A novel 3D multispectral vision system based on filter wheel cameras,” in IEEE International Conference on Imaging Systems and Techniques (IST) (IEEE, 2016), pp. 267–272.

Zheludev, V. A.

Zhou, M.

Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10, 1020 (2019).
[Crossref]

ACM Trans. Graphics (5)

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, W. Heidrich, and M. H. Kim, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graphics 38, 117 (2019).
[Crossref]

X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Trans. Graphics 33, 233 (2014).
[Crossref]

S.-H. Baek, I. Kim, D. Gutierrez, and M. H. Kim, “Compact single-shot hyperspectral imaging using a prism,” ACM Trans. Graphics 36, 217 (2017).
[Crossref]

V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein, “End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging,” ACM Trans. Graphics 37, 114 (2018).
[Crossref]

Y. Peng, Q. Sun, X. Dun, G. Wetzstein, W. Heidrich, and F. Heide, “Learned large field-of-view imaging with thin-plate optics,” ACM Trans. Graphics 38, 219 (2019).
[Crossref]

Appl. Opt. (4)

Commun. Pure Appl. Math. (1)

E. J. Candès and C. Fernandez-Granda, “Towards a mathematical theory of super-resolution,” Commun. Pure Appl. Math. 67, 906–956 (2014).
[Crossref]

IEEE J. Sel. Areas Inf. Theory (1)

Z. Liu and J. Scarlett, “Information-theoretic lower bounds for compressive sensing with generative models,” IEEE J. Sel. Areas Inf. Theory 1, 292–303 (2020).
[Crossref]

IEEE Signal Process. Mag. (1)

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

IEEE Trans. Comput. Imaging (2)

M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, “FlatCam: thin, lensless cameras using coded aperture and computation,” IEEE Trans. Comput. Imaging 3, 384–397 (2016).
[Crossref]

S. Mihoubi, O. Losson, B. Mathon, and L. Macaire, “Multispectral demosaicing using pseudo-panchromatic image,” IEEE Trans. Comput. Imaging 3, 982–995 (2017).
[Crossref]

IEEE Trans. Image Process. (1)

U. S. Kamilov, “A parallel proximal algorithm for anisotropic total variation minimization,” IEEE Trans. Image Process. 26, 539–548 (2016).
[Crossref]

ILAR J. (1)

R. M. Levenson, D. T. Lynch, H. Kobayashi, J. M. Backer, and M. V. Backer, “Multiplexing with multispectral imaging: from mice to microscopy,” ILAR J. 49, 78–88 (2008).

Figures (10)

Fig. 1. Overview of the Spectral DiffuserCam imaging pipeline, which reconstructs a hyperspectral datacube from a single-shot 2D measurement. The system consists of a diffuser and spectral filter array bonded to an image sensor. A one-time calibration procedure measures the point spread function (PSF) and filter function. Images are reconstructed using a nonlinear inverse problem solver with a sparsity prior. The result is a 3D hyperspectral cube with 64 channels of spectral information for each of $448 \times 320$ spatial points, generated from a 2D sensor measurement that is $448 \times 320$ pixels.
Fig. 2. Motivation for multiplexing: A high-NA lens captures high-resolution spatial information, but misses the yellow point source, since it comes into focus on a spectral filter pixel designed for blue light. A low-NA lens blurs the image of each point source to be the size of the spectral filter’s super-pixel, capturing accurate spectra at the cost of poor spatial resolution. Our DiffuserCam approach multiplexes the light from each point source across many super-pixels, enabling the computational recovery of both point sources and their spectra without sacrificing spatial resolution. Note that a simplified $3 \times 3$ filter array is shown here for clarity.
Fig. 3. Image formation model for a scene with two point sources of different colors, each with narrowband irradiance centered at ${\lambda _y}$ (yellow) and ${\lambda _r}$ (red). The final measurement is the sum of the contributions from each individual spectral filter band in the array. Due to the spatial multiplexing of the lensless architecture, every scene point $\mathbf v(x,y,z)$ projects information onto multiple spectral filters, enabling recovery of a high-resolution hyperspectral cube from a single image after solving an inverse problem.
Fig. 4. Experimental calibration of Spectral DiffuserCam. (a) Measured PSF, which is constant across wavelength: the caustic PSF (contrast-stretched and cropped), measured before passing through the spectral filter array, is similar at all wavelengths. (b) Measured spectrally varying filter function, i.e., the spectral response of the filter array alone (no diffuser). Top left: full measurement under illumination by a 458 nm plane wave. The filter array consists of an $8 \times 8$ grid of spectral filters within each super-pixel, repeated across $28 \times 20$ super-pixels. Top right: spectral responses of each of the 64 color channels. Bottom: spectral response of a single super-pixel as the illumination wavelength is varied with a monochromator.
Fig. 5. Spectral resolution analysis. Sample spectra from hyperspectral reconstructions of narrowband point sources, overlaid, with shaded lines indicating the ground truth. For each case, the recovered spectral peak matches the true wavelength to within 5 nm.
Fig. 6. Spatial resolution analysis. (a) The theoretical resolution of our system, defined as the half-width of the autocorrelation peak at 70% of its maximum value, is 0.19 super-pixels (a code sketch of this criterion follows the figure list). (b) Experimental two-point reconstructions demonstrate 0.19 super-pixel resolution across all wavelengths (slices of the reconstruction shown here), matching the theoretical resolution.
Fig. 7. Condition number analysis for Spectral DiffuserCam, as compared to a low-NA or high-NA lens. (a) Condition numbers for the 2D spatial case (single spectral channel) are calculated by generating different numbers of points on a 2D grid, each with separation distance $d$ (see the second sketch following the figure list). (b) Condition numbers for the full spatio-spectral case are calculated on a 3D grid. A condition number below 40 is considered good (shown in green). The diffuser performs consistently better at small separation distances than either the low-NA or the high-NA lens: it can resolve objects as close as 0.3 super-pixels apart even for more complex scenes, whereas the low-NA lens requires larger separation distances and the high-NA lens suffers errors due to gaps in the measurement.
Fig. 8. Simulated hyperspectral reconstructions comparing our Spectral DiffuserCam result with alternative design options. (a) Resolution target with different sections illuminated by narrowband 634 nm (red), 570 nm (green), 474 nm (blue), and broadband (white) sources (ground truth). (b) Reconstruction of the target by Spectral DiffuserCam, (c) a low-NA lens design, and (d) a high-NA lens design, each showing the raw data, false-colored reconstruction, and $\lambda y$ sum projection. The diffuser achieves higher spatial resolution and better accuracy than either the low-NA or the high-NA lens.
Fig. 9. (a) Resolution target reconstruction. Experimental reconstruction of a broadband resolution target, showing the $xy$ sum projection (top) and $\lambda y$ sum projection (bottom), demonstrating a spatial resolution of 0.3 super-pixels. (b) RGB LED reconstruction. Experimental reconstruction of 10 multi-colored LEDs in a grid with ${\sim}0.4$ super-pixel spacing (four red LEDs at left, four green in the middle, two blue at right). We show the $xy$ sum projection (top) and $\lambda y$ sum projection (bottom). The LEDs are clearly resolved spatially and spectrally, and the spectral line profiles for each color LED closely match the ground truth spectra from a spectrometer.
Fig. 10. Experimental hyperspectral reconstructions. (a)–(c) Reconstructions of color images displayed on a computer monitor and (d) a Thorlabs plush toy placed in front of the imager and illuminated by two halogen lamps. The raw measurement, false-color images, $x \lambda$ sum projections, and spectral line profiles for four spatial points are shown for each scene. The ground truth spectral line profiles, measured using a spectrometer, are plotted in black for reference. Spectral line profiles in (a) and (b) show the mean and standard deviation of the spectra across the area of the box or letter in the object, whereas (c) and (d) show a line profile from a single spatial point in the scene.
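
As referenced in the Fig. 6 caption, the resolution criterion can be made concrete with a short sketch. This is a minimal illustration, not the authors' code: `autocorr_halfwidth` is a hypothetical helper, and taking a 1D slice through the autocorrelation peak is a simplifying assumption (a radial average would be more thorough).

```python
import numpy as np
from scipy.signal import fftconvolve

def autocorr_halfwidth(psf, threshold=0.7):
    """Half-width (in pixels) of the PSF autocorrelation peak at
    `threshold` of its maximum, the criterion quoted in Fig. 6(a)."""
    # Autocorrelation = correlation of the PSF with a flipped copy of itself.
    ac = fftconvolve(psf, psf[::-1, ::-1], mode="full")
    ac /= ac.max()
    cy = np.unravel_index(np.argmax(ac), ac.shape)[0]
    row = ac[cy]  # 1D slice through the peak (simplification)
    above = np.flatnonzero(row >= threshold)
    return (above[-1] - above[0]) / 2.0
```

Dividing the result by the super-pixel width (in pixels) expresses the resolution in super-pixels, the unit used throughout the paper.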
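The condition-number analysis of Fig. 7 can be sketched the same way: form the sub-matrix of $\mathbf{A}$ whose columns are the sensor responses of a candidate set of point sources, then compute its condition number. Everything below is a stand-in for illustration of the 2D single-channel case; `sensor_response` and the random PSF/filter arrays are assumptions, not the paper's data.

```python
import numpy as np

def sensor_response(psf, filter_fn, x, y):
    """Simulated measurement of a point source at (x, y): the shifted
    PSF masked by the spectral filter array (2D, single wavelength)."""
    shifted = np.roll(np.roll(psf, y, axis=0), x, axis=1)
    return (filter_fn * shifted).ravel()

def condition_number(psf, filter_fn, points):
    """Condition number of the sub-matrix of A for a given point set."""
    A_sub = np.stack([sensor_response(psf, filter_fn, x, y)
                      for (x, y) in points], axis=1)
    return np.linalg.cond(A_sub)  # ratio of extreme singular values

# Example: a 3x3 grid of points with separation d (in pixels).
rng = np.random.default_rng(0)
psf = rng.random((64, 64))        # stand-in caustic PSF
filter_fn = rng.random((64, 64))  # stand-in filter response
d = 4
points = [(i * d, j * d) for i in range(3) for j in range(3)]
print(condition_number(psf, filter_fn, points))
```

Sweeping $d$ and the number of points would trace out the qualitative trend of Fig. 7: conditioning degrades as sources crowd together.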

Equations (6)


$$L[x, y] = \sum_{\lambda = 0}^{K - 1} \mathbf{F}_\lambda[x, y] \, \mathbf{v}[x, y, \lambda],$$
$$\mathbf{w}[x, y, \lambda] = \operatorname{crop}\left(\mathbf{v}[x, y, \lambda] \stackrel{[x, y]}{*} \mathbf{h}[x, y]\right),$$
$$\mathbf{b} = \sum_{\lambda = 0}^{K - 1} \mathbf{F}_\lambda[x, y] \cdot \operatorname{crop}\left(\mathbf{h}[x, y] \stackrel{[x, y]}{*} \mathbf{v}[x, y, \lambda]\right) = \sum_{\lambda = 0}^{K - 1} \mathbf{F}_\lambda[x, y] \, \mathbf{w}[x, y, \lambda] = \mathbf{A}\mathbf{v}.$$
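
To make the forward model concrete, here is a minimal NumPy sketch of the measurement equations above. It is an illustration under stated assumptions, not the paper's implementation: the scene cube `v`, PSF `h`, and per-channel filter maps `F` are hypothetical arrays, and `mode="same"` stands in for the crop operation.

```python
import numpy as np
from scipy.signal import fftconvolve

def forward(v, h, F):
    """Sketch of b = sum_lambda F_lambda * crop(h conv v_lambda).
    v: (K, Y, X) hyperspectral scene; h: (Y, X) PSF (assumed
    wavelength-independent, as in Fig. 4(a)); F: (K, Y, X) filter maps."""
    K, Y, X = v.shape
    b = np.zeros((Y, X))
    for lam in range(K):
        # 2D convolution over [x, y]; 'same' plays the role of crop().
        w = fftconvolve(v[lam], h, mode="same")
        b += F[lam] * w  # structured erasure by the filter array
    return b
```

Stacking the per-point responses of this operator as columns is what defines the system matrix $\mathbf{A}$ in $\mathbf{b} = \mathbf{A}\mathbf{v}$.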
$$\hat{\mathbf{v}} = \operatorname*{argmin}_{\mathbf{v} \ge 0} \; \frac{1}{2} \left\| \mathbf{b} - \mathbf{A}\mathbf{v} \right\|_2^2 + \tau_1 \left\| \nabla_{xy\lambda} \mathbf{v} \right\|_1 + \tau_2 \left\| \mathbf{v} \right\|_*,$$
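
A FISTA-style proximal gradient method (Beck and Teboulle) is a standard solver for objectives of this form. Below is a minimal sketch that keeps only the data term and the nonnegativity constraint, with a simple $\ell_1$ sparsity prior standing in for the paper's combined priors; it illustrates the solver family rather than the authors' exact implementation.

```python
import numpy as np

def fista_nonneg_l1(A, b, tau, n_iter=200):
    """FISTA for min_{v >= 0} 0.5 * ||b - A v||_2^2 + tau * ||v||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    v = np.zeros(A.shape[1])
    y, t = v.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)         # gradient of the data term
        z = y - grad / L
        # Prox of tau*||.||_1 plus nonnegativity: one-sided soft threshold.
        v_next = np.maximum(z - tau / L, 0.0)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = v_next + ((t - 1.0) / t_next) * (v_next - v)  # momentum step
        v, t = v_next, t_next
    return v
```

In practice the explicit matrix-vector products would be replaced by the convolution-based forward operator and its adjoint, since $\mathbf{A}$ is far too large to store explicitly.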
