Abstract

We introduce and analyze the concept of space-spectrum uncertainty for certain commonly used designs of spectrally programmable cameras. Our key finding states that it is not possible to simultaneously acquire high-resolution spatial images while programming the spectrum at high resolution. This phenomenon arises due to a Fourier relationship between the aperture used for resolving spectrum and its corresponding diffraction blur in the spatial image. We show that the product of spatial and spectral standard deviations is lower bounded by $\frac {\lambda }{4\pi \nu _0}$ femto square-meters, where $\nu _0$ is the density of grooves in the diffraction grating and $\lambda$ is the wavelength of light. Experiments with a lab prototype validate our findings and their implications for spectral programming.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Spectrum is often a unique feature of materials and is used for identification and classification across diverse fields such as geology [1], bio-imaging [2,3] and material identification [4,5]. Tools such as the hyperspectral camera capture the spectrum of a scene which is subsequently used for identification and classification purposes. Capturing the full spectrum, while useful, is also wasteful especially if we are only interested in measuring similarity of the spectral profile at each pixel to a small collection of reference spectra. It is hence useful to have cameras that can optically perform this comparison. Such cameras, called spectrally-programmable cameras, have been demonstrated [6,7] with compelling applications in computer vision. This paper analyzes a popular design for enabling spectral programmability, and derives a fundamental relationship between its achievable spatial and spectral resolutions.

1.1 Problem setting

The analysis in this paper is for the optical setup shown in Fig. 1, commonly used in prior art for spectral programming [4,7–9]. The optical system consists of a series of lenses of focal length $f$, each subsequent pair separated by $2f$. The setup relays the image plane from plane P1 to P3 with a pupil code in P2. A diffraction grating placed on P3 spectrally disperses the light. The dispersed light is focused on plane P4 to form the so-called rainbow plane, where each point corresponds to the average intensity of light of the whole scene for a single wavelength. The image on P3 is simply relayed on to plane P5. Arbitrary spectral programming can then be performed by placing a spatial light modulator (SLM) on the rainbow plane (P4) and measuring the image on plane P5. Intuitively, planes P1 to P3 form a simple camera with an aperture in its Fourier plane, and planes P2 to P4 form a spectrometer with the aperture replacing a slit. The setup provides an insight into the tradeoff between spatial and spectral resolutions: while a camera requires a large, open aperture for a compact spatial blur, this leads to a severe loss in spectral resolvability, as a spectrometer requires a narrow opening. Our goal is to formalize the role played by the shape of the pupil code in deciding spatial and spectral resolution.

Fig. 1. Optical setup for capturing images with spectral programming. P1 is the image plane of the objective lens, P2 contains spatial frequencies of the image, where we place a pupil code, $a(x, y)$. P3 contains the image plane blurred by the aperture function. We place a diffraction grating at this plane to disperse light into different wavelengths. P4 contains the resultant spectrum and P5 is a flipped copy of P3. In a camera configuration, the pupil code consists of an open aperture and leads to sharp image but blurred spectrum. In spectrometer configuration, the pupil code is a slit and leads to sharp spectrum but blurred image. Our paper formalizes the role played by pupil code for spatial and spectral resolutions.

1.2 Main result

We show that the pupil code $a(x, y)$ introduces a spectral and a spatial blur, $h_\lambda (\lambda )$ and $h_x(x)$ respectively, with standard deviations $\sigma _\lambda$ and $\sigma _x$ (detailed expressions in Eqs. (7) and (8)). Our main contribution is a lower bound on the space-spectrum bandwidth product that relates the spectral resolution at which light can be programmed and the spatial resolution of the captured image. This is encapsulated in the following theorem.

Theorem 1 For the spectrally-coded imaging architecture shown in Fig. 1, the product of spatial and spectral standard deviations $\sigma _x$ and $\sigma _\lambda$, respectively, is bounded as

$$\sigma_x \sigma_\lambda \ge \frac{\lambda}{4\pi \nu_0},$$
where $\nu _0$ is the density of grooves in the diffraction grating.

This result was first explored in [7] and [9], where the authors demonstrated that the spatial and spectral resolutions were related to the choice of pupil code. Our paper builds on their results by providing a concise expression for the tradeoff. We prove that a Gaussian-shaped pupil code achieves the lower bound and leads to the most compact spatial blur for a targeted spectral blur.

1.3 Implications

The space-spectrum bandwidth product introduces an uncertainty in spectrally-programmable cameras, stating that one cannot arbitrarily program spectrum at high resolution without loss in spatial resolution. We demonstrate the impact of this uncertainty by building a spectrally-programmable camera and showing that blocking one of two closely-spaced narrowband sources cannot be done without severe loss in spatial resolution. We also show that for narrowband filtering, the spatial blur is affected by the pupil code as well as the shape of the narrowband filter, and that a slit, a commonly used narrowband filter shape, leads to a spectrally-varying spatial blur. Instead, a Gaussian-shaped narrowband filter achieves a spectrally-independent spatial blur, making it the optimal candidate for spectral programming.

Hyperspectral imagers. Apart from spectral programming, several hyperspectral imaging architectures [8–10] rely on obtaining spectrally-programmed images. Our findings impact such setups, as a key requirement is to capture high-resolution images without sacrificing spectral resolution. Hence, the space-spectrum bandwidth product can serve as a design guide for carefully choosing the pupil code to obtain the desired spatial and spectral resolutions.

Spatially-coded cameras. We note that the analysis in this paper is targeted specifically at spectrally-programmable cameras and does not apply to hyperspectral cameras without pupil-plane coding. Cameras such as the pushbroom camera and the coded aperture snapshot spectral imager (CASSI) [11,12], which scan the full HSI, perform only spatial coding and, as such, are not affected by this result. Since such systems code space and then measure its spectrally-sheared image, the spatial code only affects the spectral resolution and not the spatial resolution.

2. Prior work

We begin by discussing the capture of images with arbitrary spectral filters and briefly state its applications. We then describe the fundamental tradeoffs that arise from system parameters.

Measurement model. Consider a scene’s hyperspectral image (HSI) represented by $H(x, y, \lambda )$, where $(x, y)$ represent spatial coordinates and $\lambda$ represents wavelength. Our goal is to optically obtain a spectrally-programmed image. Specifically, given a spectral filter $f(\lambda )$, our aim is to implement a camera that captures the following grayscale image,

$$I(x, y) = \int_{\lambda} H(x, y, \lambda) f(\lambda) d\lambda.$$
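The integral above discretizes to a per-pixel inner product between the HSI and the spectral filter. The following sketch illustrates this with synthetic stand-ins for a real scene and filter (the array sizes, wavelength range, and filter shape are our illustrative choices, not from the paper):

```python
import numpy as np

# Discrete version of Eq. (2): I(x, y) = sum_lambda H(x, y, lambda) f(lambda).
# The HSI and the filter below are synthetic stand-ins, not real data.
rng = np.random.default_rng(0)
wavelengths = np.linspace(400e-9, 700e-9, 31)
H = rng.random((64, 64, 31))          # HSI: 64x64 pixels, 31 spectral bands

# narrowband Gaussian filter centered at 550 nm (10 nm standard deviation)
f_spec = np.exp(-(wavelengths - 550e-9)**2 / (2 * (10e-9)**2))

# spectrally-programmed grayscale image: per-pixel inner product
I = np.tensordot(H, f_spec, axes=([2], [0]))
assert I.shape == (64, 64)
```

A spectrally-programmable camera computes this inner product optically, so only the $64\times 64$ image is ever measured, not the full cube.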

Applications of spectral programming. The ability to arbitrarily program spectrum enables a wide gamut of applications. This includes adaptive color displays [6], programmatically blocking illuminants [7], and detecting materials [4,5]. The key advantage in all these applications is to not measure the complete HSI, but only the desired spectrally-programmed images; this leads to fewer measurements at higher signal to noise ratio (SNR). Such a system can also be used for compressively sensing the complete HSI [8,9] which relies on capturing projection of a scene’s HSI on random or designed spectral filters.

Spectrally-programmable camera architecture. Spectral filtering is ubiquitous in imaging, from Bayer filters in RGB cameras to narrowband spectral filters in fluorescence microscopy [3]. Static filters offer arbitrarily high spectral resolution, but are ill-suited to applications that require changing filters rapidly; while this can be achieved with filter wheels, the speed of such devices is constrained by the speed at which the filters can be swapped. Electronically tunable filters [13] use liquid crystal (LC) cells to obtain a combination of narrowband spectral filters; LC filters, however, are typically slow as they require large settling times.

The most practical way of implementing programmable spectral filters that can be changed electronically and at high speeds is to rely on the setup shown in Fig. 1. Here, a dispersion element such as a grating or prism is used to create the so-called rainbow plane [6], where each point corresponds to the intensity of a single wavelength of the whole scene. By placing a spatial light modulator (SLM) on this plane, one can achieve arbitrary spectral programming. This approach is similar to replacing the sensor in a spectrometer with an SLM, and has been the de facto way of spectral programming in past works [7,8].

The SLM-based approach for spectral programming has certain advantages. First, since SLMs are fast, one can achieve high frame rates (often in excess of 60 fps), which is crucial for imaging dynamic scenes and for applications that require rapidly switching spectral filters. Second, the system is potentially capable of high spectral resolution without sacrificing capture time. However, as we will see next, a high spectral resolution leads to a severe loss in spatial resolution. The focus of this paper is this fundamental trade-off between spectral and spatial resolutions.

Time-frequency bandwidth product. Our main result is based on the time-frequency bandwidth product [14–16], which we state here for completeness. Let $x(t)$ be a centered time-domain signal, and let $X(\nu )$ be its (centered) continuous-time Fourier transform. We define the spread of time-domain and frequency-domain signals as,

$$\sigma_t = \frac{\sqrt{\int_t t^2 |x(t)|^2 dt}}{\sqrt{\int_t |x(t)|^2 dt}} \quad \textrm{and} \quad \sigma_\nu = \frac{\sqrt{\int_\nu \nu^2 |X(\nu)|^2 d\nu}}{\sqrt{\int_\nu |X(\nu)|^2 d\nu}}.$$
Then the uncertainty theorem states that,
$$\sigma_t \sigma_\nu \ge \frac{1}{4\pi}.$$
As a consequence, one cannot achieve simultaneous, arbitrarily precise localization in time and frequency. The time-frequency bandwidth product finds application in various fields of signal processing, including optical systems [17,18]. Our result is its translation to spatial and spectral signals, which arises from the Fourier transform property of a thin lens [19].
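As a quick numerical sanity check (our illustration, not from the paper), the sketch below verifies that a Gaussian pulse attains the bound in Eq. (4); the grid parameters are arbitrary choices:

```python
import numpy as np

def spread(axis, density):
    """Normalized second-moment spread, as in Eq. (3)."""
    density = density / density.sum()
    return np.sqrt(np.sum(axis**2 * density))

# Gaussian pulse: the known minimizer of the time-frequency product
t = np.linspace(-50, 50, 1 << 16)
x = np.exp(-t**2 / 2)

# FFT as a proxy for the continuous-time Fourier transform
X = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(x)))
nu = np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))

product = spread(t, np.abs(x)**2) * spread(nu, np.abs(X)**2)
assert abs(product - 1 / (4 * np.pi)) < 1e-6   # product equals 1/(4 pi)
```

Any non-Gaussian pulse on the same grid yields a strictly larger product.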

Space-spectrum resolution tradeoff. To understand the impact of the pupil code shape on spatial and spectral resolutions, consider the design of a spectrometer, which consists of a narrow opening, a dispersive element, and a sensing element. The spectral resolution of the measurements is a function of the width of the opening slit; a narrower opening leads to high resolution, while a broad slit leads to a blurred spectrum. Similarly, a programmable camera would also necessitate a narrow slit to ensure that the spectrum can be modulated at high resolution. However, such a narrow slit leads to a severe loss in spatial resolution, since imaging at high resolution requires a large, open aperture. In [6], it is noted that a large slit leads to a loss of spectral resolution, but the effect on spatial resolution is not discussed. The authors of [7] identified this tradeoff, stated an approximate relationship between spectral and spatial resolutions for a fully open aperture, and demonstrated that high spectral resolution leads to blurry images. We formalize the result and show that such a tradeoff applies to any pupil code shape and can be concisely stated as a space-spectrum bandwidth product. In the upcoming sections, we formalize the spatial and spectral resolutions that result from the choice of a pupil code.

3. Fundamental limits of spatial/spectral resolution

We now derive a concise lower bound on the product of spatial and spectral spreads for a spectrally-programmable camera.

Spectral and spatial blurs. Let us revisit the optical setup in Fig. 1, where we placed a pupil code $a(x)$ in plane P2 and obtained the rainbow plane on P4 and the image on P5. We wish to study the blur that $a(x)$ introduces in the spectral and spatial measurements. We present the expressions for the spatial and spectral blurs here and refer interested readers to Appendix A for a detailed derivation. For brevity, we show the blur along the $x$-axis alone, as there is no spectral dispersion along the $y$-axis. Let $a(x)$ be the shape of the aperture function and let $A(u)$ be its Fourier transform. Without loss of generality, we assume that $a(x)$ and $A(u)$ are both centered such that

$$\int x |a(x)|^2 dx = 0 = \int u |A(u)|^2 du.$$
Then the spectral and spatial blur functions are
$$h_\lambda (\lambda) = |a(-\lambda f \nu_0)|^2, \qquad h_x (x) = \left|A\left(-\frac{x}{\lambda f}\right)\right|^2.$$
We observe that the blur in space and spectrum are not independent; specifically, they form a Fourier-transform pair, with appropriate scaling. Our goal is to show that this interdependence between spatial and spectral blur has a very specific structure and their product can be lower bounded – implying that we cannot arbitrarily resolve in both domains.
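The Fourier pairing in Eq. (7) can be seen concretely in a small numerical sketch (our illustration; the grid and slit width are arbitrary choices): a slit aperture of width $W$ produces a sinc$^2$-shaped blur $|A(u)|^2$ whose first zero lies at $u = 1/W$, so a narrower slit spreads the blur wider.

```python
import numpy as np

# Slit aperture a(x) = rect(x / W); its blur |A(u)|^2 is (W sinc(W u))^2.
N, dx = 1 << 14, 1e-6                 # 1 um sampling
x = (np.arange(N) - N // 2) * dx
W = 200e-6                            # 200 um slit

a = (np.abs(x) < W / 2).astype(float)
a[np.isclose(np.abs(x), W / 2)] = 0.5 # half-weight slit edges for accuracy

A = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(a))) * dx
u = np.fft.fftshift(np.fft.fftfreq(N, d=dx))
h = np.abs(A)**2

h_ref = (W * np.sinc(W * u))**2       # np.sinc is the normalized sinc
assert np.max(np.abs(h - h_ref)) < 1e-2 * h.max()
```

Stretching the slit by a factor $s$ compresses $|A(u)|^2$ by the same factor, which is the reciprocal behavior the uncertainty bound formalizes.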

3.1 The space-spectrum uncertainty principle

Our main result, stated in Theorem 1, says that the spatial and spectral standard deviations are related by the inequality $\sigma _x \sigma _\lambda \ge \frac {\lambda }{4\pi \nu _0}$. We now outline the proof of our theorem.

Proof. The spectral and spatial standard deviations are,

$$ \sigma_\lambda = \sqrt{\frac{\int_\lambda \lambda^2 h_\lambda (\lambda) d\lambda}{\int_\lambda h_\lambda(\lambda) d\lambda}} = \sqrt{\frac{\int_\lambda \lambda^2 |a(-\lambda f \nu_0)|^2 d\lambda}{\int_\lambda |a(-\lambda f \nu_0)|^2 d\lambda}}$$
$$ \sigma_x = \sqrt{\frac{\int_x x^2 h_x(x) dx}{\int_x h_x(x) dx}} = \sqrt{\frac{\int_x x^2 \left| A\left(-\frac{x}{\lambda f}\right)\right|^2 dx}{\int_x \left| A\left(-\frac{x}{\lambda f}\right)\right|^2 dx}}, $$
which are similar to time and frequency spreads defined in Eq. (3) with appropriate scaling. Given that $\sigma _t$ is the spread of $x(t)$, the spread of a scaled function $\widehat {x}(t) = x(st)$ is $\widehat {\sigma }_t = \sigma _t / s$. From Eq. (4) and substituting $t = \lambda f \nu _0$ and $\nu = \frac {x}{f \lambda }$,
$$\begin{aligned} \left(\frac{1}{f^2 \lambda^2}\right) \sigma^2_x (f^2 \nu_0^2)\sigma^2_\lambda &\ge \frac{1}{16\pi^2} \implies \sigma^2_x \sigma^2_\lambda \ge \frac{\lambda^2}{16\pi^2\nu_0^2}\\ &\boxed{\sigma_x \sigma_\lambda \ge \frac{\lambda}{4\pi \nu_0} } \end{aligned}$$

Implication. We make some observations about the uncertainty principle here.

  • Invariance to scaling. The bandwidth product does not change even if the aperture is stretched or squeezed. If the aperture $a(x)$ is replaced by $a(sx)$, then spectral blur changes to $h_\lambda (\lambda ) = |a(-s\lambda f \nu _0)|^2$ and the spatial blur changes to $\left |A\left (-\frac {x}{s\lambda f}\right )\right |^2$. This changes the spectral and spatial variances to $s^2 \sigma ^2_\lambda$ and $\sigma ^2_x/s^2$, thereby keeping the product a constant.
  • Invariance to power of lenses. The bandwidth product is independent of the focal length of the system, implying that one cannot improve the product of standard deviations by changing the lenses.
  • Dependence on groove density. The bandwidth product inversely depends on the groove density $\nu _0$. In theory, one can achieve an arbitrarily low space-spectrum bandwidth product with a high groove density, but the limiting factor then becomes the aperture size of the lenses.
  • Dependence on wavelength. The bandwidth product is directly proportional to wavelength. This is expected, as the limiting case of our statement, where $\sigma _\lambda$ is several hundreds of nanometers, is just a normal grayscale imager, in which case the expression looks very similar to Abbe’s diffraction limit [20]. However, one may make the expression independent of wavelength by using the lower bound of the spectral range,
    $$\boxed{\sigma_x \sigma_\lambda \ge \frac{\lambda_\textrm{min}}{4\pi \nu_0}}$$
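The bound and its scale invariance can be checked numerically. The sketch below (our illustration; all parameter values are arbitrary choices) evaluates $\sigma _x \sigma _\lambda$ via the scalings of Eqs. (7) and (8) for a Gaussian and a triangular aperture:

```python
import numpy as np

# Check sigma_x * sigma_lambda >= lambda / (4 pi nu0) for two aperture shapes.
# From Eqs. (7)-(8): sigma_lambda = s_a / (f nu0) and sigma_x = lambda f s_A,
# where s_a, s_A are the spreads of |a(x)|^2 and |A(u)|^2.
lam, f, nu0 = 500e-9, 100e-3, 300e3   # 500 nm, 100 mm lenses, 300 grooves/mm

N, dx = 1 << 16, 0.5e-6
x = (np.arange(N) - N // 2) * dx
u = np.fft.fftshift(np.fft.fftfreq(N, d=dx))

def spread(axis, density):
    density = density / density.sum()
    return np.sqrt(np.sum(axis**2 * density))

bound = lam / (4 * np.pi * nu0)
products = {}
for name, a in [("gaussian", np.exp(-x**2 / (2 * (300e-6)**2))),
                ("triangle", np.clip(1 - np.abs(x) / 300e-6, 0, None))]:
    A = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(a)))
    sigma_lam = spread(x, np.abs(a)**2) / (f * nu0)
    sigma_x = lam * f * spread(u, np.abs(A)**2)
    products[name] = sigma_x * sigma_lam

assert all(p >= 0.999 * bound for p in products.values())
assert abs(products["gaussian"] - bound) < 0.01 * bound  # Gaussian saturates it
```

The triangular aperture lands roughly 10% above the bound, while the Gaussian saturates it, consistent with the achievability result below.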

Achievability of lower bound. As in the case of time-frequency uncertainty, there exists a pupil code function that has its space-spectrum bandwidth product equal to $\frac {\lambda }{4\pi \nu _0}$. This is achieved by the family of Gaussian windows:

$$a(x, y) = \textrm{exp}\left\{-\frac{x^2}{2\sigma^2}\right\}.$$
The spectral and spatial blur are then given by,
$$h_\lambda(\lambda) = \textrm{exp}\left\{-\frac{\lambda^2f^2\nu_0^2}{\sigma^2}\right\},\qquad h_x(x) = \textrm{exp}\left\{-\frac{4\pi^2\sigma^2x^2}{\lambda^2f^2}\right\}.$$
Figure 2 shows the simulated “uncertainty” box at $500$ nm for Gaussian windows of various widths. We simulated a system comprising $100$ mm lenses and a diffraction grating with a groove density of $300$ grooves/mm. Evidently, as we squeeze along one axis, the other axis stretches, with the product of widths constant at $145.9$ nm$\cdot \mu$m. Next, we validate our findings with an optical setup that implements the schematic in Fig. 1 and capture scenes with various aperture shapes.
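As a quick check of these Gaussian blur expressions (our arithmetic, not from the paper), the blurs have standard deviations $\sigma _\lambda = \sigma /(\sqrt {2} f \nu _0)$ and $\sigma _x = \lambda f/(2\sqrt {2}\pi \sigma )$, so their product is independent of the window width:

```python
import numpy as np

# Standard deviations of the Gaussian blurs in Eq. (9):
#   h_lambda ~ exp(-lam^2 f^2 nu0^2 / s^2)      -> sigma_lam = s/(sqrt(2) f nu0)
#   h_x      ~ exp(-4 pi^2 s^2 x^2/(lam^2 f^2)) -> sigma_x = lam f/(2 sqrt(2) pi s)
lam, f, nu0 = 500e-9, 100e-3, 300e3   # parameters used for Fig. 2

for s in [250e-6, 500e-6, 1000e-6]:   # Gaussian window widths
    sigma_lam = s / (np.sqrt(2) * f * nu0)
    sigma_x = lam * f / (2 * np.sqrt(2) * np.pi * s)
    # the product saturates the lower bound for every window width
    assert np.isclose(sigma_lam * sigma_x, lam / (4 * np.pi * nu0))
```

Squeezing $\sigma$ trades one resolution for the other while the product stays pinned at $\lambda /(4\pi \nu _0)$.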

Fig. 2. Simulated $\mathbf {x-\lambda }$ blur for Gaussian window. The four figures illustrate the spatio-spectral blur for various window sizes. The blur kernel was computed for $\lambda =500$ nm, $f=100$ mm and a groove density of $300$ grooves/mm. There is a visible trade off between the two resolutions. The appropriate window size depends on the application; a camera with low spectral resolution requirement can use a $\sigma =500 \mu$m window, while one with stringent spatial resolution requirements may use a $\sigma =1000 \mu$m window.

4. Experiments

Armed with our theoretical insights, we now validate the results with real experiments.

Optical setup. We built the optical setup shown in Fig. 3 with relevant components marked. The setup is a minor modification of the schematic shown in Fig. 1. We placed a spatial light modulator (SLM) on plane P2, which enabled the display of various coded apertures, and a diffraction grating on plane P3. The spectral measurement camera is on plane P4. Instead of placing the spatial camera on P5, we place it on P3 (using beamsplitter BS1); since we do not code the rainbow plane P4, the images on P3 and P5 are equivalent. The focal length of all lenses was $75$ mm and the diffraction grating had $300$ grooves/mm. A list of components can be found in Appendix B.

Visualization of spectral and spatial resolutions. To illustrate our hypothesis, we placed a USAF resolution chart on the image plane P1. The scene was illuminated with a cool white compact fluorescent lamp (CFL), whose spectrum comprises several narrow peaks. This setup enabled us to simultaneously visualize sharp spectra as well as sharp spatial features. Figure 4 shows images and spectra for some representative cases. Each row shows results for a specific coded aperture, whereas each column shows results for a fixed spectral resolution. The trend of decreasing spectral resolution with increasing spatial resolution is clearly visible. Further, a Gaussian aperture is superior to a slit, offering greater spatial resolution for the same spectral resolution, which agrees with our theoretical findings.

Fig. 3. Schematic and image of our lab prototype. We displayed various patterns on SLM to form coded apertures to evaluate spatial and spectral resolutions. The spectral camera was tilted to capture the first order of diffraction from the grating.

Fig. 4. Visualization of spectral and spatial resolutions. We illuminated a USAF resolution target with a CFL lamp. We varied aperture types and widths to get spatial and spectral measurements. Each row shows image and spectrum for a specific aperture, and each column shows image and spectrum for a fixed spectral standard deviation. The results clearly illustrate the tradeoff between spatial and spectral resolution.

Quantitative verification. We illuminated a pinhole with a spectrally-narrowband light source with a central wavelength of 670 nm and an FWHM of 3 nm. We captured both the spectrum of the light source and the image of the pinhole, which we used to compute the corresponding standard deviations. Figure 5 plots the reciprocal of the spatial standard deviation against the spectral standard deviation. The two plots show a straight line, verifying that the product of spatial and spectral standard deviations is constant. We also observe that the line for the Gaussian aperture is very close to the theoretically optimal line, confirming that the lower bound is tight, even in practice.

Fig. 5. Quantitative measurement of resolutions. We captured spatial and spectral measurements using the setup in Fig. 3. We illuminated a pinhole with a narrowband light source at 670 nm and swept across various sizes of (a) Gaussian and (b) slit apertures. The reciprocal of the spatial standard deviation plotted against the spectral standard deviation clearly shows a straight line.

Fig. 6. Schematic and image of our prototype for spectral programming. We illuminated a USAF target with 520 nm and 532 nm lasers. We then blocked 532 nm with a spatial mask on the rainbow plane. Images were captured with various slit widths to show the effect of space-spectrum uncertainty.

4.1 Spectral programming

Next, we discuss the impact of our findings for various scenarios of spectral programming.

Effect of edge-pass filter. The tradeoff between spectral and spatial resolution affects how well the spectrum can be coded. To test this, we illuminate two closely spaced spatial points in a scene with two narrowband light sources (520 nm and 532 nm). We then attempt to block the 520 nm laser with various coded apertures and observe the spatial image. Figure 6 shows the schematic and our lab prototype for spectral programming, which is similar to the schematic shown in Fig. 1 with a spatial mask placed on plane P4. The results are shown in Fig. 7. With a broad aperture, it is not possible to effectively block one of the two lasers, as shown in the fourth column. A narrow aperture enables effective blocking (compare the first and last columns), but at the cost of spatial resolution.

Fig. 7. Spectral programming with narrowband sources. A consequence of the space-spectrum bandwidth product is the inability to program spectrum at high resolution without sacrificing spatial resolution. In this example, we show that blocking one of two closely spaced narrowband lasers can only be done with severe loss in spatial resolution.

Effect of the shape of a narrowband filter. We now show that a slit has unintended implications when used as a narrowband filter. To perform narrowband spectral programming, an intuitive choice is to place a narrow slit on the rainbow plane P4 and a camera on plane P5. This results in a spectral profile similar to the example in Fig. 8(a). While such a mask works well for the target wavelength, the spatial images corresponding to adjacent wavelengths suffer a severe loss in spatial resolution.

Fig. 8. Effect of narrowband spectral programming on spatial resolution. Spatial resolution is affected by the pupil code as well as the spatial mask used for creating a narrowband spectral filter. We illuminated a pinhole with a 520 nm laser and then swept various spatial masks to filter wavelengths around 520 nm. For configurations where either the pupil code or the spatial mask was a slit, the spatial resolution got worse with increasing gap between desired and laser wavelength. In contrast, a configuration with both masks being Gaussian resulted in a wavelength-independent spatial blur.

To understand the effect of a narrowband filter, consider a scene illuminated by a monochromatic light source of wavelength $\lambda _1$. The resulting field on the rainbow plane is

$$i_4(x) = a\left(x - \lambda_1f\nu_0\right).$$
Now let a spatial mask $\widehat {a}(x)$ centered around $\lambda _2$ be placed on the rainbow plane. Then the field just after the mask is
$$\widetilde{i}_4(x) = a\left(x - \lambda_1f\nu_0\right)\widehat{a}\left(x - \lambda_2f\nu_0\right).$$
If both $a(x)$ and $\widehat {a}(x)$ are slits of width $W$, then the effective width is $W - f\nu _0|\lambda _1-\lambda _2|$, which decreases with increasing gap between the two wavelengths. This is illustrated by the plots of PSFs in Fig. 8(c). We utilized the setup in Fig. 6, where we illuminated a pinhole with a 520 nm laser. We placed a spatial mask on a horizontal translation stage to block various adjacent wavelengths. We then measured the image of the pinhole and fit an appropriate curve to the measured PSF. Evidently, the PSF has a larger spread as the gap between the target wavelength and the central wavelength of the filter increases. This is true even if the pupil code is a Gaussian mask and the filter a slit, as shown in Fig. 8(d). Ideally, we require the effective width to be independent of $\lambda _1$ and $\lambda _2$. This is achieved if both the pupil code and the filter have a Gaussian shape. In such a case, the field is
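A numerical sketch of this slit-overlap argument (our illustration; the slit width and wavelengths are arbitrary choices) confirms the shrinking common support. Per the two equations above, the aperture image and the mask sit at $\lambda _1 f \nu _0$ and $\lambda _2 f \nu _0$ on the rainbow plane, so two width-$W$ slits are displaced by $f\nu _0|\lambda _1 - \lambda _2|$ relative to each other:

```python
import numpy as np

# Overlap of two width-W slits on the rainbow plane, displaced relative to
# each other by f * nu0 * |lam1 - lam2|. Parameters are illustrative.
f, nu0 = 75e-3, 300e3                 # 75 mm lenses, 300 grooves/mm
W = 200e-6
lam1, lam2 = 520e-9, 522e-9

dx = 1e-7
shift = f * nu0 * abs(lam2 - lam1)    # 45 um relative displacement
x = np.arange(-1e-3, 1e-3, dx)        # coordinates relative to the aperture
slit1 = np.abs(x) <= W / 2
slit2 = np.abs(x - shift) <= W / 2

effective = np.count_nonzero(slit1 & slit2) * dx
assert abs(effective - (W - shift)) < 5 * dx   # overlap width shrinks by shift
```

Once the displacement exceeds $W$, the overlap (and hence the transmitted light) vanishes entirely.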
$$\begin{aligned}\widetilde{i}_4(x) &= \textrm{exp}\left\{-(x - \lambda_1f\nu_0)^2/\sigma^2\right\}\textrm{exp}\left\{-(x - \lambda_2f\nu_0)^2/\sigma^2\right\}\\ &= \underbrace{\textrm{exp}\left\{-(\lambda_1-\lambda_2)^2f^2\nu_0^2/(2\sigma^2)\right\}}_{\textrm{amplitude}}\underbrace{\textrm{exp}\left\{-2(x - (\lambda_1+\lambda_2)f\nu_0/2)^2/\sigma^2\right\}}_{\textrm{aperture shape}}. \end{aligned}$$
The output field has a spread that is independent of $\lambda _1, \lambda _2$, which results in a wavelength-independent PSF. This is illustrated by the plot of PSFs in Fig. 8(e) with a Gaussian aperture as well as a Gaussian filter shape. Figure 8(f) compares the PSF spread in terms of the spatial standard deviation for various positions of the filter, clearly illustrating the wavelength-independent blur arising from the Gaussian-shaped pupil code and filter.
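The wavelength-independence of the Gaussian-Gaussian case can also be verified numerically (our sketch; parameter values are illustrative):

```python
import numpy as np

# Product of two shifted Gaussian masks on the rainbow plane: its spatial
# spread is the same for every wavelength gap; only the amplitude changes.
f, nu0, s = 75e-3, 300e3, 200e-6      # illustrative parameters

def spread(x, density):
    density = density / density.sum()
    mean = np.sum(x * density)
    return np.sqrt(np.sum((x - mean)**2 * density))

lam1 = 520e-9
x = lam1 * f * nu0 + np.linspace(-2e-3, 2e-3, 1 << 16)

widths = []
for lam2 in [520e-9, 525e-9, 530e-9]:
    field = (np.exp(-(x - lam1 * f * nu0)**2 / s**2)
             * np.exp(-(x - lam2 * f * nu0)**2 / s**2))
    widths.append(spread(x, np.abs(field)**2))

# identical spatial spread regardless of the wavelength gap
assert np.allclose(widths, widths[0], rtol=1e-6)
```

Repeating the experiment with rectangular masks makes `widths` shrink as the gap grows, reproducing the slit behavior of Fig. 8(c).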

5. Discussions and conclusion

We formalized the tradeoff between spectral and spatial resolution associated with a spectrally-programmable camera of the type shown in Fig. 1 and stated the space-spectrum uncertainty principle. We showed through theory, simulations, and real experiments that one can finely resolve space or spectrum, but not both. Our analysis then showed that a Gaussian-shaped aperture achieves the theoretical lower bound, and that a Gaussian-shaped narrowband filter introduces a wavelength-independent spatial blur. We believe our findings impact scientific imaging at large by providing insights and design guidelines for systems that rely on spectral programming.

Application to fundamental limits of hyperspectral imaging. We note that our analysis does not limit the spatial and spectral resolution of hyperspectral cameras. For cameras that utilize tunable filters, the spatial resolution is a function of the grayscale camera, while the spectral resolution is independently dictated by the tunable filter. For spatially-coded cameras such as pushbroom and CASSI, the sensing process does not rely on pupil coding and hence incurs no diffractive blur. To understand this, consider a spatially-coded camera that has only one opening in the spatial plane, $\delta (x-x_0, y-y_0)$. The measurement on the camera sensor after propagating through a dispersive element is,

$$I_\textrm{cam}(x, y) = \int_\lambda \delta(x-x_0 -\Delta (\lambda),\; y - y_0)\, d\lambda,$$
where $\Delta (\lambda )$ is a wavelength-dependent shift. We observe that the image is simply a spectrally-smeared version of the spatial point, with no extra blur (other than that caused by aberrations in the optics). In a sense, if the desired application requires a full scan of the scene’s HSI, then it is possible to achieve high spatial and spectral resolution. However, our paper targets spectrally-programmable cameras. As we saw in the prior work section, such cameras are indispensable for efficiently sensing [9] and inferring [4,7] HSIs. A practical and fast implementation of spectrally-programmable cameras requires pupil coding, which limits the simultaneously achievable spatial and spectral resolutions.

Appendix A Spatio-spectral blur due to pupil code

We note that all our analysis is for spatially-incoherent light; any phase component is hence irrelevant. We rely on the Fourier transform property of a thin lens [19] as well as the derivation in [9]. Assume that the complex field distribution on plane P1 is $i_1(x, y, \lambda )$. Then the field distribution on P2, which is $2f$ away, is given by the scaled Fourier transform relationship, $i_2(x_2, y_2, \lambda ) = \frac {1}{j\lambda f}I_1\left (\frac {x_2}{\lambda f}, \frac {y_2}{\lambda f}, \lambda \right )$, where $I_1(u, v)$ is the Fourier transform of $i_1(x, y)$. Propagating the signal through the optical setup simply requires us to perform such operations iteratively.

Consider a single spatial point on P1 of the form $i_1(x_1, y_1, \lambda ) = s(x_0, y_0, \lambda )\delta (x_1 - x_0, y_1 - y_0)$, where $s(x_0, y_0, \lambda )$ is the complex amplitude of the point as a function of wavelength. Any arbitrary image can then be treated as an infinite collection of such point sources. The amplitude distribution on plane P2 is the scaled Fourier transform of the amplitude on plane P1 and is given by,

$$i_2(x_2, y_2, \lambda) = \frac{1}{j\lambda f}s(x_0, y_0, \lambda) \textrm{exp}\left\{-\frac{2\pi j}{\lambda f}(x_0 x_2 + y_0 y_2)\right\}$$
Let $a(x, y)$ be the complex amplitude of the pupil code placed on P2. Then the field just after the aperture is given by,
$$\widehat{i}_2(x_2, y_2, \lambda) = \frac{1}{j\lambda f}s(x_0, y_0, \lambda) \textrm{exp}\left\{-\frac{2\pi j}{\lambda f}(x_0 x_2 + y_0 y_2)\right\}\times a(x_2, y_2)$$
With a similar derivation, we can show that the field distribution on P3 just before the diffraction grating is,
$$i_3(x_3, y_3, \lambda) = \frac{1}{j\lambda f}\widehat{I}_2\left(\frac{x_3}{\lambda f}, \frac{y_3}{\lambda f}\right) =\frac{1}{(j\lambda f)^2}s(x_0, y_0, \lambda) A\left(\frac{x_3 + x_0}{\lambda f}, \frac{y_3 + y_0}{\lambda f}\right),$$
where $A(u, v)$ is the Fourier transform of $a(x, y)$. For simplicity of analysis, we model the diffraction grating as a series of narrow slits, i.e., an impulse train along the $x$-axis,
$$d(x, y) = \sum_{k=-\infty}^{\infty} \delta\left(x - \frac{k}{\nu_0}\right),$$
where $\nu _0$ is the groove density. Then the propagated field just after the grating is given by,
$$\begin{aligned}\widehat{i}_3(x_3, y_3, \lambda) &= i_3(x_3, y_3, \lambda) d(x_3, y_3)\\ &= \frac{1}{(j\lambda f)^2}s(x_0, y_0, \lambda) A\left(\frac{x_3 + x_0}{\lambda f}, \frac{y_3 + y_0}{\lambda f}\right) \times \sum_{k=-\infty}^{\infty} \delta\left(x_3 - \frac{k}{\nu_0}\right) \end{aligned}$$
Using Fourier transform property of lens again, the field on P4 is,
$$\begin{aligned}i_4(x_4, y_4, \lambda) &= \frac{1}{j\lambda f}\widehat{I}_3\left(\frac{x_4}{\lambda f}, \frac{y_4}{\lambda f}\right) = \frac{1}{j\lambda f}\left(\frac{1}{j\lambda f}\right)^2I_3\left(\frac{x_4}{\lambda f}, \frac{y_4}{\lambda f}\right) \ast D\left(\frac{x_4}{\lambda f}, \frac{y_4}{\lambda f}\right)\\ &= \frac{1}{j\lambda f}\left(\frac{1}{j\lambda f}\right)^2I_3\left(\frac{x_4}{\lambda f}, \frac{y_4}{\lambda f}\right) \ast \left(\delta\left(\frac{y_4}{\lambda f}\right) \sum_{k=-\infty}^{\infty} \delta\left(\frac{x_4}{\lambda f} - k\nu_0\right)\right)\\ &= -\frac{1}{j\lambda f}s(x_0, y_0, \lambda) \sum_{k=-\infty}^{\infty} a(-(x_4 - k\nu_0 \lambda f), -y_4)\textrm{exp}\left\{j\frac{2\pi}{\lambda f}(x_0(x_4 - k\lambda f \nu_0) + y_0y_4)\right\}, \end{aligned}$$
The field thus consists of multiple, spectrally-dispersed copies of the aperture $a(x, y)$ along the $x$-axis. Our optical setup is designed to propagate only the first diffraction order, and hence we retain the $k=1$ copy, giving us,
$$i_4(x_4, y_4, \lambda) = -\frac{1}{j\lambda f}s(x_0, y_0, \lambda) \underbrace{a(-(x_4 - \nu_0 \lambda f), -y_4)}_\textrm{spectrally-shifted a(x, y)}\textrm{exp}\left\{j\frac{2\pi}{\lambda f}(x_0(x_4 - \lambda f \nu_0) + y_0y_4)\right\}.$$
Finally, propagating the signal one more lens away, we get,
$$i_5(x_5, y_5, \lambda) = \frac{1}{(j\lambda f)^2}\textrm{exp}\left\{-j2\pi x_5 \nu_0\right\} s(x_0, y_0, \lambda) A\left(-\frac{x_5 + x_0}{\lambda f}, -\frac{y_5 + y_0}{\lambda f}\right).$$

Intensity measurements. Consider cameras placed on planes P4 and P5, each with a spectral response $c(\lambda )$. The intensity measurement on P4 is then

$$\begin{aligned}M_4(x_4, y_4) &= \int_{\lambda} |i_4(x_4, y_4, \lambda)|^2 c(\lambda) d\lambda\\ &= \int_{\lambda} \frac{1}{\lambda^2 f^2} |s(x_0, y_0, \lambda)|^2 |a(-(x_4 - \nu_0 \lambda f), -y_4)|^2 c(\lambda) d\lambda\\ &= \widehat{S}\left(x_0, y_0, \frac{x_4}{f \nu_0}\right) \ast |a(-x_4, -y_4)|^2, \end{aligned}$$
where $\widehat {S}\left (x_0, y_0, \lambda \right ) = \frac {1}{\lambda ^2 f^2} |s(x_0, y_0, \lambda )|^2 c(\lambda )$ is the measured intensity of the scene point. Extending to all points $(x_0, y_0)$, we get,
$$M_4(x_4, y_4) = \int_{x_0} \int_{y_0} \widehat{S}\left(x_0, y_0, \frac{x_4}{f \nu_0}\right) \ast |a(-x_4, -y_4)|^2 = S\left(\frac{x_4}{f \nu_0}\right) \ast |a(-x_4, -y_4)|^2 .$$
Here, $S(\lambda )$ is the integral of the spectra of all scene points. Equation (26) shows that the aperture function $a(x, y)$ induces a spectral blur at every scene point. Similarly, the spatial image on P5 is,
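As a sanity check on this convolution model, the sketch below simulates the rainbow-plane measurement for a scene containing two narrowband lines. The focal length and groove density follow the values used later in the appendix ($f = 75$ mm, $300$ grooves/mm) and the 520/532 nm line pair mirrors the lasers used in the experiments; the aperture widths are illustrative assumptions.

```python
import numpy as np

# Rainbow-plane measurement per Eq. (26): the mean scene spectrum convolved
# with the aperture intensity |a|^2, with wavelength mapped to position via
# x4 = lambda * f * nu0.  Aperture widths below are illustrative.
f, nu0 = 75e-3, 300e3                    # focal length (m), grooves per meter
x4 = np.linspace(10e-3, 14e-3, 4001)     # rainbow-plane coordinate (m)
lam = x4 / (f * nu0)                     # wavelength at each position (m)

# Scene spectrum: two narrowband lines at 520 nm and 532 nm.
S = np.exp(-((lam - 520e-9) / 0.5e-9) ** 2) + np.exp(-((lam - 532e-9) / 0.5e-9) ** 2)

def measure(width):
    """Blur the line spectrum with a box (slit) aperture of the given width (m)."""
    dx = x4[1] - x4[0]
    kernel = np.ones(max(1, round(width / dx)))
    return np.convolve(S, kernel / kernel.size, mode="same")

narrow = measure(50e-6)     # narrow slit: the two lines remain resolved
wide = measure(1000e-6)     # wide aperture: the lines merge into one lobe
```

A 50 µm slit spans roughly 2.2 nm on the wavelength axis and keeps the 12 nm line pair separated, while a 1 mm opening spans roughly 44 nm and merges them, matching the slit-versus-open-aperture intuition of Section 1.1.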
$$\begin{aligned}M_5(x_5, y_5) &= \int_{\lambda} |i_5(x_5, y_5, \lambda)|^2 c(\lambda) d\lambda\\ &= \frac{1}{\lambda^4 f^4} \int_{\lambda} |s(x_0, y_0, \lambda)|^2 \left|A\left(-\frac{x_5+x_0}{\lambda f}, -\frac{y_5+y_0}{\lambda f}\right)\right|^2c(\lambda)d\lambda. \end{aligned}$$
Computing intensity for all $(x_0, y_0)$ gives us,
$$\begin{aligned} M_5(x_5, y_5) &= \int_{x_0}\int_{y_0} \frac{1}{\lambda^4 f^4} \int_{\lambda} |s(x_0, y_0, \lambda)|^2 \left|A\left(-\frac{x_5+x_0}{\lambda f}, -\frac{y_5+y_0}{\lambda f}\right)\right|^2c(\lambda)d\lambda\\ &= \frac{1}{\lambda^4 f^4} \int_{\lambda} \underbrace{|s(x_5, y_5, \lambda)|^2 \ast \left|A\left(-\frac{x_5}{\lambda f}, -\frac{y_5}{\lambda f}\right)\right|^2}_{\textrm{Spatial blur}}c(\lambda)d\lambda.\end{aligned}$$
Equation (28) shows that the pupil code $a(x, y)$ introduces a spatial blur equal to a scaled version of its power spectral density (PSD), $|A(u, v)|^2$. For a monochromatic light source, (28) is simply a convolution of the scene’s image with a scaled PSD of $a(x, y)$; for polychromatic sources, the expression has a spectrally-dependent PSF and does not follow a convolution model. To simplify the analysis, we assume that the shape of the PSF is approximately constant over a small range of wavelengths. The resultant expression for the spatial image is then,
$$\begin{aligned} M_5(x_5, y_5) &= \left(\frac{1}{\lambda^4 f^4}|s(x_5, y_5, \lambda)|^2\right) \ast \left|A\left(-\frac{x_5}{\lambda_c f}, -\frac{y_5}{\lambda_c f}\right)\right|^2\\ &= I(x_0, y_0) \ast \left|A\left(-\frac{x_5}{\lambda_c f}, -\frac{y_5}{\lambda_c f}\right)\right|^2, \end{aligned}$$
where $I(x_0, y_0)$ is a scaled grayscale image of the scene, and $\lambda _c$ is a chosen central wavelength of the spectral range.

Spectral and spatial blurs. For brevity, we drop the $y$-axis, as it does not affect the spectral blur. From (26) and (29), we obtain the following expressions for the spectral and spatial blurs,

$$h_\lambda (\lambda) = |a(-\lambda f \nu_0)|^2, \qquad h_x (x) = \left|A\left(-\frac{x}{\lambda f}\right)\right|^2.$$
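These two kernels can be made concrete with a short numerical sketch. Assuming a Gaussian pupil code $a(x) = \textrm{exp}\{-x^2/2\sigma^2\}$ and the $f = 75$ mm, $300$ grooves/mm values used in the appendix (the width $\sigma$ is an illustrative choice), the product of the two standard deviations lands on the $\lambda/(4\pi\nu_0)$ bound:

```python
import numpy as np

# Standard deviations of the spectral blur h_lambda and the spatial blur h_x
# for a Gaussian pupil code, checked against sigma_x * sigma_lambda >=
# lambda / (4 pi nu0).  The width sigma is an illustrative choice.
lam = 500e-9      # wavelength (m)
f = 75e-3         # focal length (m)
nu0 = 300e3       # groove density (grooves per meter)
sigma = 500e-6    # Gaussian pupil-code width (m)

def std(axis, h):
    """Second-moment standard deviation of a zero-centered kernel on a uniform grid."""
    return np.sqrt(np.sum(axis ** 2 * h) / np.sum(h))

# Spectral blur: h_lambda(l) = |a(-l f nu0)|^2 = exp(-(l f nu0)^2 / sigma^2).
l = np.linspace(-200e-9, 200e-9, 4001)
sigma_lam = std(l, np.exp(-((l * f * nu0) ** 2) / sigma ** 2))

# Spatial blur: h_x(x) = |A(-x/(lam f))|^2 with |A(u)|^2 = exp(-4 pi^2 sigma^2 u^2).
x = np.linspace(-200e-6, 200e-6, 4001)
sigma_x = std(x, np.exp(-4 * np.pi ** 2 * sigma ** 2 * (x / (lam * f)) ** 2))

bound = lam / (4 * np.pi * nu0)
print(sigma_x * sigma_lam / bound)   # ratio of 1: the Gaussian meets the bound
```

The equality is particular to Gaussian codes; Appendix A.1 shows that other aperture shapes sit strictly above the bound.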


Fig. 9. Simulations on common aperture shapes. (a) compares spectral and spatial standard deviations, whereas (b) shows the spectral and spatial MTF at $30\%$ contrast. Gaussian codes achieve the theoretical limit when the resolution metric is the standard deviation of the window.


A.1 Verification using simulations

We validate our theory with simulations. We specifically compare a box aperture, which simulates a slit or a fully open aperture, against a Gaussian aperture. For the purpose of exposition, we used $f=75$ mm and a diffraction grating with $300$ grooves/mm. Figure 9(a) plots the spatial and spectral standard deviations, and (b) plots the spatial and spectral modulation transfer function (MTF) at a $30\%$ contrast ratio. The plots show a clear tradeoff between the two resolutions, independent of the resolution metric.

We also observe that Gaussian codes achieve the theoretical limit for the standard-deviation metric. Hence, we conclude that the space-spectrum bandwidth product is a tight bound.
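A minimal numerical version of this comparison is sketched below; $f$ and the groove density follow the text, while the aperture widths and grid extents are illustrative assumptions. Because the second moment of a slit's $\textrm{sinc}^2$ PSF diverges, the box value depends on the grid truncation and only illustrates that it exceeds the bound.

```python
import numpy as np

# Space-spectrum products for a Gaussian and a box (slit) pupil code,
# computed from the blur kernels in Eq. (30).  Widths and grid extents
# are illustrative; f and nu0 follow the text.
lam, f, nu0 = 500e-9, 75e-3, 300e3

def std(axis, h):
    """Second-moment standard deviation of a kernel on a uniform grid."""
    return np.sqrt(np.sum(axis ** 2 * h) / np.sum(h))

def space_spectrum_product(aperture):
    """sigma_x * sigma_lambda for a 1D pupil-code profile a(x)."""
    x = np.linspace(-5e-3, 5e-3, 2 ** 14)            # pupil coordinate (m)
    a = aperture(x)
    # Spectral blur: |a|^2 with position mapped to wavelength by x = lam f nu0.
    sigma_lam = std(x / (f * nu0), np.abs(a) ** 2)
    # Spatial blur: PSD of a, mapped to image space by u = x5 / (lam f).
    u = np.fft.fftshift(np.fft.fftfreq(x.size, x[1] - x[0]))
    psd = np.abs(np.fft.fftshift(np.fft.fft(a))) ** 2
    return std(u * lam * f, psd) * sigma_lam

bound = lam / (4 * np.pi * nu0)
p_gauss = space_spectrum_product(lambda x: np.exp(-x ** 2 / (2 * 300e-6 ** 2)))
p_box = space_spectrum_product(lambda x: (np.abs(x) < 250e-6).astype(float))
print(p_gauss / bound, p_box / bound)   # Gaussian near 1; box well above 1
```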

Appendix B List of components

Figure 10 lists the components for the setup in Fig. 3, and Fig. 11 lists the components for the setup in Fig. 6.


Fig. 10. List of components for the setup in Fig. 3. The figure shows the names and vendors of the marked components for simultaneously measuring spectrum and space.



Fig. 11. List of components for the setup in Fig. 6. The figure shows the names and vendors of the marked components for spectral programming.


Funding

National Geospatial-Intelligence Agency's Academic Research Program (Award No. HM0476-17-1-2000); NSF CAREER grant (CCF-1652569); NSF Expeditions award (1730147); Prabhu and Poonam Goel graduate fellowship.

Disclosures

The authors declare no conflicts of interest.

References

1. E. Cloutis, “Review article hyperspectral geological remote sensing: Evaluation of analytical techniques,” Int. J. Remote Sens. 17(12), 2215–2242 (1996). [CrossRef]  

2. N. Colthup, Introduction to Infrared and Raman Spectroscopy (Elsevier, 2012).

3. J. W. Lichtman and J.-A. Conchello, “Fluorescence microscopy,” Nat. Methods 2(12), 910–919 (2005). [CrossRef]  

4. V. Saragadam and A. C. Sankaranarayanan, “Programmable spectrometry–per-pixel classification of materials using learned spectral filters,” arXiv preprint arXiv:1905.04815 (2019).

5. T. Zhi, B. R. Pires, M. Hebert, and S. G. Narasimhan, “Multispectral imaging for fine-grained recognition of powders on complex backgrounds,” in IEEE Intl. Conf. Comp. Vision and Pattern Recognition (CVPR), (2019).

6. A. Mohan, R. Raskar, and J. Tumblin, “Agile spectrum imaging: Programmable wavelength modulation for cameras and projectors,” in Comp. Graphics Forum, (2008).

7. S. P. Love and D. L. Graff, “Full-frame programmable spectral filters based on micromirror arrays,” J. Micro/Nanolith. MEMS MOEMS 13(1), 011108 (2014). [CrossRef]  

8. X. Lin, G. Wetzstein, Y. Liu, and Q. Dai, “Dual-coded compressive hyperspectral imaging,” Opt. Lett. 39(7), 2044–2047 (2014). [CrossRef]  

9. V. Saragadam and A. C. Sankaranarayanan, “KRISM—Krylov subspace-based optical computing of hyperspectral images,” ACM Trans. Graph. 38(5), 1–14 (2019). [CrossRef]  

10. Y. August, C. Vachman, Y. Rivenson, and A. Stern, “Compressive hyperspectral imaging by random separable projections in both the spatial and the spectral domains,” Appl. Opt. 52(10), D46–D54 (2013). [CrossRef]  

11. A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47(10), B44–B51 (2008). [CrossRef]  

12. D. Kittle, K. Choi, A. Wagadarikar, and D. J. Brady, “Multiframe image estimation for coded aperture snapshot spectral imagers,” Appl. Opt. 49(36), 6824–6833 (2010). [CrossRef]  

13. Wikipedia contributors, “Liquid crystal tunable filter,” https://en.wikipedia.org/wiki/Liquid_crystal_tunable_filter (2019). [Online; accessed: 2019-07-18].

14. A. Grami, “Chapter 3 - signals, systems, and spectral analysis,” in Introduction to Digital Communications, (Academic Press, 2016), pp. 41–150.

15. M. G. Cowling and J. F. Price, “Bandwidth versus time concentration: the heisenberg–pauli–weyl inequality,” SIAM J. Math. Anal. 15(1), 151–165 (1984). [CrossRef]  

16. R. Pierri, A. Liseno, R. Solimene, and F. Tartaglione, “In-depth resolution from multifrequency born fields scattered by a dielectric strip in the fresnel zone,” J. Opt. Soc. Am. A 19(6), 1234–1238 (2002). [CrossRef]  

17. A. W. Lohmann, R. G. Dorsch, D. Mendlovic, Z. Zalevsky, and C. Ferreira, “Space–bandwidth product of optical signals and systems,” J. Opt. Soc. Am. A 13(3), 470–473 (1996). [CrossRef]  

18. Z. Zhang and M. Levoy, “Wigner distributions and how they relate to the light field,” in IEEE Intl. Conf. Computational Photography (ICCP), (2009).

19. J. W. Goodman, Introduction to Fourier optics (Roberts and Company Publishers, 2005).

20. S. G. Lipson, H. Lipson, and D. S. Tannhauser, Optical Physics (Cambridge University Press, 1995).




Figures (11)

Fig. 1. Optical setup for capturing images with spectral programming. P1 is the image plane of the objective lens; P2 contains the spatial frequencies of the image, where we place a pupil code $a(x, y)$. P3 contains the image plane blurred by the aperture function; we place a diffraction grating at this plane to disperse light into its wavelengths. P4 contains the resultant spectrum, and P5 is a flipped copy of P3. In a camera configuration, the pupil code is an open aperture, leading to a sharp image but a blurred spectrum. In a spectrometer configuration, the pupil code is a slit, leading to a sharp spectrum but a blurred image. Our paper formalizes the role played by the pupil code in spatial and spectral resolutions.

Fig. 2. Simulated $\mathbf {x-\lambda }$ blur for a Gaussian window. The four figures illustrate the spatio-spectral blur for various window sizes. The blur kernel was computed for $\lambda =500$ nm, $f=100$ mm, and a groove density of $300$ grooves/mm. There is a visible tradeoff between the two resolutions. The appropriate window size depends on the application; a camera with a low spectral resolution requirement can use a $\sigma =500\,\mu$m window, while one with stringent spatial resolution requirements may use a $\sigma =1000\,\mu$m window.

Fig. 3. Schematic and image of our lab prototype. We displayed various patterns on the SLM to form coded apertures and evaluate spatial and spectral resolutions. The spectral camera was tilted to capture the first order of diffraction from the grating.

Fig. 4. Visualization of spectral and spatial resolutions. We illuminated a USAF resolution target with a CFL lamp, and varied aperture types and widths to obtain spatial and spectral measurements. Each row shows the image and spectrum for a specific aperture, and each column shows the image and spectrum for a fixed spectral standard deviation. The results clearly illustrate the tradeoff between spatial and spectral resolution.

Fig. 5. Quantitative measurement of resolutions. We captured spatial and spectral measurements using the setup in Fig. 3. We illuminated a pinhole with a narrowband light source at 670 nm and swept across various sizes of (a) Gaussian and (b) slit apertures. The reciprocal of the spatial standard deviation plotted against the spectral standard deviation clearly forms a straight line.

Fig. 6. Schematic and image of our prototype for spectral programming. We illuminated a USAF target with 520 nm and 532 nm lasers, and then blocked 532 nm with a spatial mask on the rainbow plane. Images were captured with various slit widths to show the effect of space-spectrum uncertainty.

Fig. 7. Spectral programming with narrowband sources. A consequence of the space-spectrum bandwidth product is the inability to perform spectral programming at high resolution. In this example, we show how blocking one of two closely spaced narrowband lasers can only be done with a severe loss in resolution.

Fig. 8. Effect of narrowband spectral programming on spatial resolution. Spatial resolution is affected by the pupil code as well as the spatial mask used for creating a narrowband spectral filter. We illuminated a pinhole with a 520 nm laser and then swept various spatial masks to filter wavelengths around 520 nm. For configurations where either the pupil code or the spatial mask was a slit, the spatial resolution worsened with increasing gap between the desired and laser wavelengths. In contrast, a configuration with both masks being Gaussian resulted in a wavelength-independent spatial blur.

Fig. 9. Simulations on common aperture shapes. (a) compares spectral and spatial standard deviations, whereas (b) shows the spectral and spatial MTF at $30\%$ contrast. Gaussian codes achieve the theoretical limit when the resolution metric is the standard deviation of the window.

Fig. 10. List of components for the setup in Fig. 3. The figure shows the names and vendors of the marked components for simultaneously measuring spectrum and space.

Fig. 11. List of components for the setup in Fig. 6. The figure shows the names and vendors of the marked components for spectral programming.
