Snapshot channeled imaging spectrometer using geometric phase holograms

Open Access

Abstract

In this paper, we present the design and experimental demonstration of a snapshot imaging spectrometer based on channeled imaging spectrometry (CIS) and channeled imaging polarimetry (CIP). Using a geometric phase microlens array (GPMLA) with multiple focal lengths, the proposed spectrometer selects wavelength components within its designed operating waveband of 450–700 nm. Compared to other snapshot spectral imagers, its key components are especially suitable for rapid roll-to-roll (R2R) fabrication, giving the spectrometer potential for low-cost mass production. The principles and proof-of-concept experimental system of the sensor are described in detail, followed by lab validation and outdoor measurement results that demonstrate the sensor’s ability to resolve spectral and spatial content under both laboratory and natural illumination conditions.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

An imaging spectrometer samples the irradiance across a two-dimensional (2D) scene at multiple wavelengths. The acquired information forms a three-dimensional (3D) datacube denoted by the coordinates x, y, and λ. Compared to common broadband imaging, the additional spectral information can be utilized for applications in various domains such as remote sensing, biomedical diagnosis, and astronomical imaging [1–3]. Depending on how the spatial and spectral content is acquired, imaging spectrometers can be divided into two categories: scanning imaging spectrometers and snapshot imaging spectrometers [4,5].

Scanning imaging spectrometers obtain the 3D datacube through spatial or spectral scanning in a time-sequential fashion. For instance, a whiskbroom or pushbroom imaging spectrometer records selected spectral components of one pixel or one line at a time and builds up the 2D scene through physical scanning [6,7], while an interferometric spectral imager, such as the infrared hyperspectral imaging polarimeter (IHIP), scans through different wavelengths and captures a 2D image at each individual wavelength [8,9]. In contrast, snapshot imaging spectrometers acquire the 3D datacube in a single measurement, so the acquisition time is limited only by the camera’s exposure time; this often involves trade-offs, e.g., between spectral and spatial resolution, or between spectral resolution and signal-to-noise ratio (SNR). In recent years, as new optical materials and components have become available, various snapshot imaging spectrometer designs have been developed; nevertheless, snapshot imaging spectrometry technologies remain scarce compared to scan-based technologies [10]. Notable snapshot imaging spectrometers developed over the past 30 years include: the computed tomography imaging spectrometry (CTIS) technique, first demonstrated in 1991/1992, which simultaneously projects dispersed images of the 2D scene along different directions and reconstructs the spectral information using computed tomography algorithms [11,12]; the image slicing/mapping spectrometer (ISS/IMS), first developed for astronomical applications and recently adapted for hyperspectral fluorescence microscopy [13,14]; the coded aperture snapshot spectral imager (CASSI), which combines coded aperture spectrometry with compressed sensing [15]; the image replicating imaging spectrometer (IRIS), a generalization of the Lyot filter [10]; and the snapshot hyperspectral imaging Fourier transform spectrometer (SHIFT) recently developed by Kudenov et al. [16]. Other generalized architectures are discussed in the review paper [17].

In this paper, we report a snapshot imaging spectrometer design that simultaneously records both spatial and spectral information using a pair of polarization gratings (PGs), a geometric phase microlens array (GPMLA) with multiple focal lengths, and a common focal plane array (FPA) sensor. Compared to the complicated and hard-to-fabricate components in other snapshot spectral imagers, the PGs and GPMLA in the proposed spectrometer have great potential to be rapidly fabricated with roll-to-roll (R2R) processing, owing to recent advances in geometric phase hologram (GPH) techniques [18,19]. Sections 2 and 3 discuss the working principles of the proposed device, the system models, and the radiometric/spatial resolution tradespace. Section 4 overviews the proof-of-concept experimental setup and its alignment test results. Lastly, Section 5 demonstrates outdoor measurement results using the experimental setup and compares them to results acquired from an Ocean Optics USB2000 modular spectrometer.

2. Principle of operation

The Geometric Phase Hologram Imaging (GPHI) spectrometer modulates all incident light onto a spatial carrier frequency, generated using two parallel PGs, and selects wavelength components using a multi-focus GPMLA. A schematic of the system is depicted in Fig. 1. Light from the object first transmits through fore-optics consisting of an objective lens, a field stop, and a collimating lens, which prevents overlap between the spectral sub-images formed by adjacent lenslets. Light then passes through a linear polarizer LP1 before transmitting through a pair of PGs (PG1 and PG2) separated by a distance t. Both PGs have identical spatial frequencies and are used to create two chromatically sheared beams [20]. Since the two beams have orthogonal polarization states, they are analyzed by a second linear polarizer LP2 to enable interference. A quarter wave plate (QWP) is placed after LP2 to convert the linear polarization state into circular polarization, thus ensuring that only the GPMLA’s positive focal position is used. The light is finally focused onto the FPA by the GPMLA, which is fabricated such that each lenslet focuses a different wavelength at a fixed distance f0 from the array. This creates a spectral dependence in the defocus modulation transfer function (MTF) of each lenslet, which filters the PGs’ achromatic carrier frequency. The spatial modulation enables the individual spectral components to be extracted using 2D Fourier processing, similar to [21].

Fig. 1 Schematic of the proposed snapshot imaging spectrometer in the yz plane. Light passes through the gratings and is focused by the lenslet array onto the FPA plane.

As per [20], the intensity measured by the FPA, in response to the shear generated by PG1 and PG2, can be calculated by

$$I(x,y,\lambda)=\frac{1}{2}\left[S_0(x,y,\lambda)+S_1(x,y,\lambda)\cos(2\pi\xi_0 x)+S_2(x,y,\lambda)\sin(2\pi\xi_0 x)\right],\tag{1}$$
where S0, S1, and S2 are the Stokes parameters corresponding to intensity, linear horizontal over vertical, and linear 45° over 135° polarizations, respectively, and ξ0 is the spatial carrier frequency. This is defined as
$$\xi_0=\frac{2mt}{f_0\Lambda},\tag{2}$$
where m is the PG’s diffraction order (m = 0, ±1), t is the separation between the two PGs, f0 is the lens’s focal length, and Λ is the PG’s spatial period.

For this system, LP1 modulates the incident S0 component into S1 such that S1 = S0 and S2 = 0. Furthermore, the spectral content of S0 is also modulated onto the carrier frequency by the GPMLA’s defocus, as modeled by its modulation transfer function versus wavelength. The spectral resolution of each lenslet depends upon a given lenslet’s aperture size, focal length, back focal distance, and the spatial frequency of the carrier. A schematic, describing the pertinent parameters of each lenslet, is illustrated in Fig. 2.

Fig. 2 Schematic of a single GPH lenslet focusing two wavelengths: the design wavelength λ0 and the out-of-band wavelength λ1.

Two linear systems models were produced based on: (1) geometrical optics; and (2) scalar diffraction theory. We note that, while the geometrical model has greater error (approximately 10%, as discussed shortly), it enables faster calculation and a conceptual understanding of the filtering mechanism underlying the concept.

As provided in Eq. (1), the shear creates a spatial carrier frequency across the FPA. This is filtered by the lenslet’s MTF response versus wavelength, such that a spectral passband is created via spatial filtering. The spatial contrast of the carrier was first calculated by geometrical optics in one dimension. The focal length at each wavelength can be calculated as

$$f_\lambda(\lambda)=\frac{\lambda_0}{\lambda}f_0.\tag{3}$$

From Fig. 2, given Δz = f0 − fλ, the width of the geometrical point spread function (PSF) w for an FPA located at plane f0 imaging a wavelength λ can be calculated as

$$w\approx D\left(\frac{\lambda}{\lambda_0}-1\right),\tag{4}$$
where D is the lenslet’s diameter. The PSF’s functional form is therefore
$$\mathrm{PSF}(y)=\mathrm{rect}\!\left(\frac{y}{w}\right),\tag{5}$$
where rect is a rectangular function as defined by Gaskill [22]. Fourier transformation of this yields the MTF as
$$\mathrm{MTF}(\xi)\propto\left|\mathrm{sinc}(w\xi)\right|,\tag{6}$$
where ξ is the transform variable of y. The transmission of the lenslet versus wavelength is equal to the MTF after substituting the wavelength-dependent width w as

$$\tau(\lambda)=\left|\mathrm{sinc}\!\left[D\xi_0\left(\frac{\lambda}{\lambda_0}-1\right)\right]\right|.\tag{7}$$

The full width at half maximum (FWHM) spectral bandwidth of each lenslet is thus

$$\mathrm{FWHM}(\lambda)=\frac{1.2\,\lambda_0}{D\,\xi_0}.\tag{8}$$
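
As a short numeric illustration of Eqs. (7) and (8) (the lenslet diameter, design wavelength, and carrier frequency below are illustrative values we chose, not the prototype’s parameters):

```python
import numpy as np

def tau_geometric(lam, lam0, D, xi0):
    """Geometric-optics spectral transmission of a lenslet, Eq. (7).
    np.sinc(x) = sin(pi x)/(pi x), matching Gaskill's definition."""
    return np.abs(np.sinc(D * xi0 * (lam / lam0 - 1.0)))

def fwhm_geometric(lam0, D, xi0):
    """FWHM spectral bandwidth of the |sinc| passband, Eq. (8)."""
    return 1.2 * lam0 / (D * xi0)

# Illustrative parameters: 1 mm lenslet, 550 nm design wavelength,
# 30 cycles/mm carrier frequency.
lam0 = 550e-9        # m
D = 1e-3             # m
xi0 = 30e3           # cycles/m

print(fwhm_geometric(lam0, D, xi0) * 1e9)   # 22 nm FWHM for these values
```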

In addition to the 1D geometrical model, we also produced a more rigorous 2D model based on scalar diffraction theory and the transfer function of free space [22]. The defocus wavefront error, across the pupil, is defined as

$$W(\rho)=W_{020}\,\rho^2,\tag{9}$$
where ρ is the normalized pupil radius. The normalized radius is defined as
$$\rho=\frac{r}{R}=\frac{\sqrt{x_p^2+y_p^2}}{R},\tag{10}$$
where R = D/2 and xp, yp are the Cartesian pupil coordinates. The wavefront coefficient [23] for defocus can be calculated as
$$W_{020}(\lambda)=\frac{\Delta z}{8\lambda}\left(\frac{D}{f_\lambda}\right)^2,\tag{11}$$
which has units of waves at the analysis wavelength λ. The focal shift can be calculated by
$$\Delta z(\lambda)=f_0-f_\lambda(\lambda),\tag{12}$$
and finally, the focal ratio used for the simulation is calculated as

$$F/\#(\lambda)=f_\lambda(\lambda)/D.\tag{13}$$

The PSF was calculated at each wavelength, using the wavefront aberrations defined by Eq. (9) at each value of Δz, corresponding to a given fλ(λ). The MTF was then calculated as the absolute value of the PSF’s Fourier transformation to yield a 3D function MTF(ξ, η, Δz), where Δz is implicitly dependent on wavelength. The relative spectral transmission function of the lenslet was calculated as

$$\tau(\lambda)=\mathrm{MTF}(0,\xi_0,\Delta z).\tag{14}$$
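
The sketch below shows one way the scalar-diffraction transmission of Eqs. (9)–(14) might be evaluated numerically. It uses the defocused-pupil autocorrelation (equivalent to the PSF-then-MTF route described above for incoherent imaging); the grid size, padding, and the use of the pupil-to-detector distance f0 in the frequency-to-shear mapping are our assumptions, not values from the paper.

```python
import numpy as np

def lenslet_transmission(lam, lam0=550e-9, f0=8.8e-3, D=1e-3, xi0=29e3, N=1024, pad=2.0):
    """Relative spectral transmission tau(lam) ~ MTF(xi0) of a defocused GPH lenslet,
    following Eqs. (3) and (9)-(14). A sketch, not a radiometrically calibrated model."""
    f_lam = lam0 / lam * f0                          # Eq. (3): chromatic focal length
    dz = f0 - f_lam                                  # Eq. (12): focal shift
    W020 = dz / (8 * lam) * (D / f_lam) ** 2         # Eq. (11): defocus coefficient (waves)

    # Generalized (defocused) pupil sampled on a grid of width pad*D, Eqs. (9)-(10)
    x = (np.arange(N) - N // 2) * (pad * D / N)
    xp, yp = np.meshgrid(x, x)
    rho2 = (xp ** 2 + yp ** 2) / (D / 2) ** 2
    P = np.where(rho2 <= 1.0, np.exp(-2j * np.pi * W020 * rho2), 0.0)

    # Incoherent OTF = normalized autocorrelation of the pupil; an image-plane
    # frequency xi corresponds to a pupil shear of lam * f0 * xi (assumption: f0 is
    # the pupil-to-detector distance).
    shear_px = int(round(lam * f0 * xi0 / (pad * D / N)))
    overlap = np.sum(P * np.conj(np.roll(P, shear_px, axis=1)))
    return float(np.abs(overlap) / np.sum(np.abs(P) ** 2))

# Example: relative transmission of a 550 nm lenslet at a few probe wavelengths
for lam_nm in (500, 530, 550, 570, 600):
    print(lam_nm, round(lenslet_transmission(lam_nm * 1e-9), 3))
```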

The geometric 1D model was used to calculate the FWHM spectral bandwidth as depicted in Fig. 3. This quantity scales inversely with the factor Dξ0 and linearly with the design wavelength. While not explicitly presented here, the scalar diffraction model follows a similar trend with an additional weak dependence on the focal length f0 for a fixed lenslet diameter D. This implies that there is a higher-order focal-ratio dependence that does not appear in the geometrical model. Errors of the geometrical and scalar diffraction models, relative to the experimental data, are provided in Section 4. From our results, this focal-ratio dependence increases the geometrical calculation of the FWHM by approximately 10%.

Fig. 3 (a) Plot of FWHM spectral bandwidth versus Dξ0, calculated using the geometrical model, for design wavelengths of 400, 550, 650, and 800 nm; (b) Plot of the FWHM versus wavelength for fixed values of Dξ0 of 20, 40, 60, and 80.

Finally, the total intensity of the light, accounting for the spectral distribution and the lenslet’s MTF response and integrated over wavelength, can be expressed as

$$I(x,y)=\frac{1}{2}\int_{\lambda_1}^{\lambda_2}\left[S_0(x,y,\lambda)+S_0(x,y,\lambda)\,\tau(\lambda)\cos(2\pi\xi_0 x)\right]d\lambda,\tag{15}$$
where λ1 and λ2 denote the minimum and maximum wavelengths of the FPA’s spectral responsivity. As illustrated in Fig. 4, 2D Fourier transformation of Eq. (15) yields three channels: C0 at baseband and C1 and its conjugate at spatial frequencies ±ξ0 [20], such that
$$C_0=\frac{1}{2}\int_{\lambda_1}^{\lambda_2}S_0(\xi,\eta,\lambda)\star\delta(\xi,\eta)\,d\lambda,\tag{16}$$
and
$$C_1=\frac{1}{4}\int_{\lambda_1}^{\lambda_2}\tau(\lambda)\left[S_0(\xi,\eta,\lambda)\star\delta(\xi-\xi_0,\eta)\right]d\lambda,\tag{17}$$
where δ is the Dirac delta function and ⋆ denotes the correlation operation. Since C0 contains the spectrally band-integrated information corresponding to both the focused and defocused light, inverse Fourier transformation is only needed on channel C1 to extract the scene’s datacube. This yields

Fig. 4 Fourier space representation of the color channel information contained within each lenslet’s sub-image.

$$I_1(x,y)=\frac{1}{4}\int_{\lambda_1}^{\lambda_2}\tau(\lambda)\,S_0(x,y,\lambda)\exp(i2\pi\xi_0 x)\,d\lambda.\tag{18}$$

Computationally, this amounts to applying two fast Fourier transforms and an element-wise multiplication with the bandpass filter. For a datacube extracted from a 5-megapixel image, this can be achieved in 0.18 seconds in Matlab on a Xeon E5-26XX v2 2.4 GHz CPU.
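
A minimal sketch of this Fourier-domain demodulation is given below (the function name and the particular raised-cosine bandpass are our choices for illustration, not the authors’ reconstruction code; the paper’s own window appears in Eq. (19) of Section 3):

```python
import numpy as np

def demodulate_subimage(img, xi0, dx):
    """Sketch of the C1-channel extraction for one lenslet sub-image (Eqs. (16)-(18)).
    img: 2D sub-image (floats); xi0: carrier frequency in cycles/m; dx: pixel pitch in m."""
    ny, nx = img.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)

    F = np.fft.fft2(img)

    # Raised-cosine bandpass centered on +xi0 with half-width xi0/2; zero elsewhere.
    wh = xi0 / 2.0
    r = np.sqrt((FX - xi0) ** 2 + FY ** 2)
    win = np.where(r < wh, 0.5 * (1.0 + np.cos(np.pi * r / wh)), 0.0)

    # Inverse transform of the isolated C1 channel; per Eq. (18) its magnitude is
    # 1/4 of the band-integrated tau(lambda)*S0, so scale by 4 to recover it.
    i1 = np.fft.ifft2(F * win)
    return 4.0 * np.abs(i1)
```

For the prototype geometry described in Section 4 (ξ0 ≈ 29 cycles/mm with 4.54 µm pixels), the carrier sits at roughly 0.13 cycles per pixel, i.e., about 7.6 pixels per fringe.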

3. Radiometric and spatial resolution tradespace

A radiometric model was created to assess the spectrometer’s signal to noise ratio (SNR) tradespace as related to the carrier frequency and FWHM spectral bandwidth. This tradeoff is more stringent when compared to a multi-aperture filtered camera, since the out-of-band light remains coincident with the in-band light. Thus, the shot noise present in the C0 channel will be distributed into the C1 channel, reducing the overall SNR. A Monte Carlo simulation was produced in which a spatially and spectrally uniform scene was generated. The scene consisted of 100 spectral bands spanning 400 to 800 nm with a signal consisting of 25 e−/nm. Equation (15) was used to simulate the modulated intensity across a 500 × 500 pixel sub-image containing 2 µm² pixels. All calculations were conducted at λ0 = 550 nm for FWHM spectral bandwidths spanning 5-50 nm in 5 nm increments for three spatial carrier frequencies ξ0 of 30, 50, and 80 cycles/mm. Finally, shot noise was simulated across 100 trials and a spatial filter, based on a Hanning window, was used to extract the channel data. The window was defined as

$$W(\xi,\eta)=\frac{1}{4}\left\{1+\cos\!\left[\frac{2\pi(\xi-\alpha)}{w_h}\right]\right\}\left\{1+\cos\!\left[\frac{2\pi(\eta-\beta)}{w_h}\right]\right\},\tag{19}$$
where wh is the window width and α, β are the window’s positions within the Fourier plane. For our simulation, wh = ξ0/2, α = ξ0, and β = 0, such that the C1 channel is extracted with maximum spatial resolution while maintaining low cross-talk from the C0 channel. It should be noted that, while the C0 channel contains all residual unmodulated light, this light is highly defocused. Unlike channeled spatial polarimetry systems, where an equally intense but in-focus S0 component causes significant cross-talk into the channel containing S1 and S2, here the inherent defocus minimizes cross-talk by band-limiting the C0 channel’s spatial frequency content [24]. Finally, the SNR was calculated for each case and averaged across all trials; the results are depicted in Fig. 5 as dashed lines for each simulated ξ0. As expected, the SNR decreases with decreasing FWHM spectral bandwidth. At a 5 nm FWHM, the SNR spans 58-67, while at 50 nm it spans 340-473 across the simulated range of ξ0.
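
For reference, a direct implementation of the window in Eq. (19) is sketched below; truncating it to zero outside |ξ−α|, |η−β| ≤ wh/2 is our reading of “window width,” since the equation itself does not state the support.

```python
import numpy as np

def hann_window_2d(xi, eta, wh, alpha, beta=0.0):
    """Separable raised-cosine (Hanning) filter of Eq. (19), centered at (alpha, beta)."""
    u, v = xi - alpha, eta - beta
    w = 0.25 * (1.0 + np.cos(2.0 * np.pi * u / wh)) * (1.0 + np.cos(2.0 * np.pi * v / wh))
    # Truncate outside the central lobe (assumed support of width wh in each axis)
    return np.where((np.abs(u) <= wh / 2.0) & (np.abs(v) <= wh / 2.0), w, 0.0)

# C1-channel filter used in the simulation: wh = xi0/2, alpha = xi0, beta = 0
xi0 = 50.0                                   # cycles/mm, one of the simulated carriers
f_axis = np.linspace(-250.0, 250.0, 501)     # cycles/mm (illustrative frequency axis)
XI, ETA = np.meshgrid(f_axis, f_axis)
W_C1 = hann_window_2d(XI, ETA, wh=xi0 / 2.0, alpha=xi0)
```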

Fig. 5 Results of a Monte Carlo simulation, indicating the signal to noise ratio (SNR) tradespace of the sensor compared to a standard filtered camera. Solid lines indicate a dichroic filter (D) while dashed lines indicate spectral filtering via spatial modulation (S).

For comparison, the same Monte Carlo analysis was also used to calculate the SNR of a dichroic filtered camera, assuming the filter had a peak transmission of 70% and used the same sinc transmission function τ, per Eq. (14). Furthermore, the same window function in Eq. (19) was used such that the spatial noise bandwidth was identical for both systems; however, the filter was placed around the baseband C0 channel with wh = ξ0/2, α = 0, and β = 0. Results from this simulation are depicted in Fig. 5 as solid lines. Also as expected, the SNR is higher for the dichroic filter. At a 5 nm FWHM, the SNR spans 370-570, while at 50 nm it spans 900-1500 across the simulated range of ξ0.

From these results, a multi-aperture filtered camera (MAFC) obtains approximately 7.5 times greater SNR than the GPMLA spectral filtering technique. Additionally, it offers a higher potential spatial resolution. Therefore, the advantages of the GPMLA lie in fabrication and size. Specifically, the MAFC requires refractive lenslets paired with a wide range of dichroic filters, each of which must be fabricated with a different process; in this case, 100 different filters. Note that, while wedge filtering techniques exist that reduce the number of filters needed, their FWHM spectral bandwidth is usually limited and has a strong trade-off against the spectrometer’s total spectral range (e.g., 5 nm FWHM can be achieved over a 50 nm bandwidth, or 30 nm over 300 nm) [25]. Conversely, the GPMLA can be fabricated with a single, roll-to-roll-compatible process in which both the image-forming and spectral-filtering optics are produced simultaneously, enabling a reduced-cost spectral imaging solution. Such devices could be paired with, e.g., mobile phone cameras to enable spectral imaging at reduced cost, and all components can be laminated into planar structures and distributed as pre-aligned monolithic modules. A final analysis assesses the total number of FPA pixels required to obtain sub-images of a certain size (Nx × Ny pixels), given a margin of s pixels between sub-images and a total of Nw spectral bands. In accordance with the analysis of [17], the total number of FPA pixels required is

$$M=(N_x+2s)(N_y+2s)N_W,\tag{20}$$
which is identical to that of a standard MAFC.
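
For illustration (these numbers are ours, not the prototype’s): a 200 × 200 pixel sub-image with a 10-pixel margin and 28 spectral bands would require M = 220 × 220 × 28 ≈ 1.36 megapixels.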

4. Experimental setup and lab validation

To validate the principles explained in the previous sections, a proof-of-concept experimental setup was built on the benchtop, as shown in Fig. 6. The GPMLA proof of concept used 6 × 7 square lenslets that were designed to focus spectra from 400 to 810 nm in 10 nm increments. Each lenslet had a size of 1 × 1 mm, and the GPMLA was mounted on an xy translation stage to allow easier alignment between the lenslets and the camera plane. The distance between the lenslet array and the camera’s photosensitive area was 8.8 mm, which is the nominal value of f0 in the prior models. The camera was an Allied Vision (AVT) GX2750 GigE camera with a 4.54 × 4.54 µm pixel size. The two PGs had a spatial period of Λ = 100 µm and were separated by a distance t = 12.8 mm, yielding a carrier frequency of ξ0 = 29 cycles/mm. Thus, the proof of concept had a maximum spatial resolution of 14.5 cycles/mm after spatial filtering. It should be noted that, with further optimization in a future prototype, the carrier frequency could reasonably be increased to 55 cycles/mm (or, more generally, 4 pixels per fringe) for a maximum spatial cutoff frequency of 27.5 cycles/mm. This would require PGs with a spatial period of Λ = 52.7 µm at the same separation t = 12.8 mm or, for a more compact system, a smaller spatial period of Λ = 4 µm with t = 1 mm.
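
As a quick consistency check of Eq. (2) against the parameters quoted above (this check is our addition):

```python
# Carrier frequency from Eq. (2) using the prototype parameters quoted above.
m = 1            # PG diffraction order
t = 12.8e-3      # PG separation (m)
f0 = 8.8e-3      # lenslet-to-FPA distance (m)
Lam = 100e-6     # PG spatial period (m)

xi0 = 2 * m * t / (f0 * Lam)       # cycles/m
print(xi0 / 1e3)                   # ~29.1 cycles/mm, matching the quoted 29 cycles/mm

# With the proposed Lambda = 52.7 um at the same separation t, Eq. (2) gives ~55 cycles/mm.
```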

Fig. 6 Photo of the experimental setup on the benchtop.

For spectral calibration, the center wavelength of each lenslet must be characterized. This was achieved by illuminating an integrating sphere with a monochromator and configuring the system to view the sphere’s exit port. The monochromator was then scanned for wavelengths spanning 400-700 nm in 5 nm increments; although only 470-700 nm were used in our reconstructions due to the lower blue response of our setup. A view of raw data from the monochromator experiment is presented in Figs. 7(a)-7(c), illustrating how the interference fringe’s contrast moves between the lenslets as the wavelength is increased to 470, 550, and 650 nm, respectively.

Fig. 7 Monochromator data collected from the sensor at (a) 470 nm; (b) 550 nm; and (c) 650 nm for a subset of 4 × 7 lenslets. The region of maximum spatial contrast transitions across the array due to the spatial filtering of the carrier frequency by the GPMLA’s different focal lengths.

The monochromator measurements were used to quantify the spectral transmission of three lenslets in the array. These were then compared to the spectral transmittances predicted by both the geometrical and scalar diffraction models developed previously. The spectral transmittances versus wavelength for the 470 nm, 570 nm, and 650 nm lenslets are illustrated in Fig. 8(a). As expected, the 2D scalar diffraction model offers a closer match to the measured values than the simpler 1D geometric model. To further quantify this, the FWHM spectral bandwidth of each lenslet was calculated by fitting an ideal sinc function to the normalized spectral transmission functions measured by the monochromator. A plot of the FWHM spectral bandwidth calculated for each lenslet, compared to the geometrical and scalar diffraction models’ predictions, is presented in Fig. 8(b). The RMS error between the experimental results and the diffraction and geometrical models was calculated to be 0.56 nm and 4.29 nm, respectively. For this system, the geometrical model under-estimates the FWHM spectral bandwidth by a factor of 0.86 across the spectral passband. However, its relative simplicity offers rapid first-order calculations, while simultaneously offering a conceptual framework for the sensor’s spatial-spectral filtering mechanism.
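
A sketch of this sinc-fitting step is shown below; the function names and initial guesses are ours, since the authors’ fitting routine is not specified beyond “fitting an ideal sinc function.”

```python
import numpy as np
from scipy.optimize import curve_fit

def sinc_passband(lam_nm, lam_c, fwhm):
    """Normalized |sinc| passband with center lam_c and FWHM fwhm, cf. Eqs. (7)-(8):
    the half-maximum points of |sinc| fall at an argument of about +/-0.6."""
    return np.abs(np.sinc(1.2 * (lam_nm - lam_c) / fwhm))

def fit_fwhm(lam_nm, tau_measured, lam_c_guess, fwhm_guess=15.0):
    """Fit the measured, normalized transmission of one lenslet and return
    (center wavelength, FWHM) in nm."""
    popt, _ = curve_fit(sinc_passband, lam_nm, tau_measured,
                        p0=(lam_c_guess, fwhm_guess),
                        bounds=([400.0, 1.0], [800.0, 100.0]))
    return popt
```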

Fig. 8 (a) Measured and predicted spectral transmittances versus wavelength for the 470 nm, 570 nm, and 650 nm lenslets; (b) Measured and predicted FWHM spectral bandwidths for each lenslet versus wavelength.

To verify the device’s calibration, red, green, white, blue, and yellow NIST-traceable spectralon tiles were imaged. In the lab, data images of the spectralon tiles were captured by the prototype under identical illumination from a xenon lamp. The spectral images were obtained by applying an FFT, Fourier-domain filtering, and an inverse FFT to the data images, as described earlier. To compare the results from the proof-of-concept setup with the corresponding NIST standards, image registration must be applied to the spectral sub-images. To calculate the image registration coefficients, a 2D target with high spatial frequency content was recorded by the spectrometer and the full image was divided into individual spectral sub-images. Using the spectral sub-image at 520 nm as the primary reference, the image registration coefficients were calculated by applying a spatial image registration algorithm to all the other spectral sub-images [16]. The coefficients were then applied to the spectralon tile results.
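
One straightforward way to compute the translational part of such registration coefficients is phase correlation against the 520 nm reference sub-image; the sketch below uses scikit-image and assumes pure translation, which is a simplification of the registration applied in [16].

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def register_subimages(subimages, ref_wavelength=520):
    """Estimate (dy, dx) offsets of each spectral sub-image relative to the reference
    and shift them into alignment. subimages: dict {wavelength_nm: 2D array}."""
    reference = subimages[ref_wavelength]
    offsets, aligned = {}, {}
    for wl, img in subimages.items():
        # Sub-pixel translation estimate via phase correlation
        offset, _, _ = phase_cross_correlation(reference, img, upsample_factor=10)
        offsets[wl] = offset
        aligned[wl] = nd_shift(img, offset, order=1, mode='nearest')
    return offsets, aligned
```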

The reflectance of each colored tile was calculated by normalizing the intensities of its spectral sub-images to those of the white tile. Figure 9 depicts the reflectance curves measured by the prototype along with the corresponding NIST standard curves for the same tiles. The measured curves were averaged over a 10 × 10 pixel area on the spectral sub-images. As can be seen, from 470 nm to 740 nm the measured reflectance curves closely fit the NIST standard curves, indicating that each lenslet brought the polarization fringes to focus only at its designated wavelength. The RMS error calculated between the measured and standard curves is provided in Table 1.

Fig. 9 The normalized NIST traceable reflectance of the blue (dashed-dotted line), green (solid line), yellow (dashed line), and red (dotted line) spectralon tiles, as compared to the measured (M) values.


Table 1. RMS error calculated for each color tile from 470 to 720 nm.

5. Outdoor imaging and measurement results

In addition to the lab validation, the prototype was also used to image an outdoor environment to test its ability to resolve the spatial and spectral content of a 2D scene under typical sunlight illumination. An outdoor scene containing a blue car, plants, and red brick, as depicted in Fig. 10(a), was measured and processed with the aforementioned procedures. The 28 lenslets on the GPMLA were arranged as illustrated in Fig. 10(b), in which each lenslet is denoted by its wavelength of primary focus. The raw data image, as captured by the FPA, is depicted in Fig. 10(c), and a magnified view of one sub-image is presented in Fig. 10(d). This demonstrates the carrier frequency associated with the scene’s 650 nm light. Finally, Fig. 10(e) depicts the spatially filtered spectral sub-images.

Fig. 10 (a) RGB image of the outdoor scene. (b) The locations and focused wavelengths of the lenslets on the GPMLA. (c) Raw data image captured by the AVT GX2750 camera. (d) Enlarged view of a sub-image from raw data, showing the polarization fringes. (e) Filtered spectral image.

After image registration, the spectra of three locations in the scene, corresponding to (1) the car, (2) the brick, and (3) the plants, as illustrated in Fig. 11(a), were generated by normalizing their spectral intensities to that of the scene’s asphalt. The asphalt’s reflectivity was separately calibrated against a NIST-traceable white spectralon tile under the same illumination conditions. Spectral responses from these three locations were also measured by an Ocean Optics USB2000 modular spectrometer as references. The normalized spectral curves from the two spectrometers are drawn together in Figs. 11(b)-11(d). The RMS errors in the car’s, brick’s, and plant’s reflectivity were 9.2%, 6.1%, and 9.2%, respectively, for wavelengths spanning 475-700 nm. These errors are higher than those of the indoor tests by a factor of 2.5 on average, which is likely because the laboratory data were averaged over 10 × 10 pixel areas, as opposed to the 1 × 1 pixel sampling demonstrated here. We also attribute some artifacts in the data, most notably around 550 and 620 nm, to the use of off-the-shelf optomechanics configured with free-space holders. These peaks appear in both the brick and leaf reflectivity and may be indicative of a systematic error in the setup. Future work will focus on developing a higher-order model of these errors within the framework of a more robust prototype sensor.

Fig. 11 (a) Image of the original scene; (b-d) Calculated reflectivity from the GPMLA compared to the OO spectrometer for car (position 1), brick (position 2), and leaf (position 3).

6. Conclusion

A Geometric Phase Hologram Imaging (GPHI) spectrometer has been presented in this paper, together with theoretical models and experimental validation results. While the 2D scalar diffraction model provides predictions that more closely match the experimental spectral results, the simpler 1D geometric model supports rapid calculations and provides a conceptual framework for the sensor’s filtering mechanism. A proof-of-concept experimental system was built and calibrated in the lab. The calibrated system showed good accuracy in measuring the spectral characteristics of NIST-traceable spectralon tiles in a lab environment. To validate the experimental system’s performance under natural illumination, it was used to record an outdoor scene under sunlight; the results showed good agreement with those from an Ocean Optics USB2000 spectrometer. Potential application fields of the proposed spectrometer include agricultural sensing, remote sensing, biomedical imaging, food inspection, and quality control. Compared to existing technologies, the configuration of our system is especially suitable for high-speed R2R manufacturing processes, which gives it a potential advantage in manufacturing speed and cost.

Funding

National Institutes of Health award number 1R21EY022174.

References

1. J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. 45(22), 5453–5469 (2006).

2. W. R. Johnson, D. W. Wilson, W. Fink, M. Humayun, and G. Bearman, “Snapshot hyperspectral imaging in ophthalmology,” J. Biomed. Opt. 12(1), 014036 (2007).

3. L. Weitzel, A. Krabbe, H. Kroker, N. Thatte, L. E. Tacconi-Garman, M. Cameron, and R. Genzel, “3D: The next generation near-infrared imaging spectrometer,” Astron. Astrophys. Suppl. Ser. 119(3), 531–546 (1996).

4. R. G. Sellar and G. D. Boreman, “Classification of imaging spectrometers for remote sensing applications,” Opt. Eng. 44(1), 013602 (2005).

5. M. R. Descour, C. E. Volin, E. L. Dereniak, K. J. Thome, A. B. Schumacher, D. W. Wilson, and P. D. Maker, “Demonstration of a high-speed nonscanning imaging spectrometer,” Opt. Lett. 22(16), 1271–1273 (1997).

6. C. G. Simi, E. M. Winter, M. M. Williams, and D. C. Driscoll, “Compact airborne spectral sensor (COMPASS),” in Algorithms for Multispectral, Hyperspectral, and Ultraspectral Imagery VII, Proc. SPIE 4381, 129–137 (2001).

7. M. W. Kudenov, M. E. Lowenstern, J. M. Craven, and C. F. LaCasse, “Field deployable pushbroom hyperspectral imaging polarimeter,” Opt. Eng. 56(10), 103107 (2017).

8. J. Craven-Jones, M. W. Kudenov, M. G. Stapelbroek, and E. L. Dereniak, “Infrared hyperspectral imaging polarimeter using birefringent prisms,” Appl. Opt. 50(8), 1170–1185 (2011).

9. Y. Wang, M. W. Kudenov, and J. Craven-Jones, “Phase error in Fourier transform spectrometers employing polarization interferometers,” in Polarization: Measurement, Analysis, and Remote Sensing XI, Proc. SPIE 9099, 90990K (2014).

10. A. Gorman, D. W. Fletcher-Holmes, and A. R. Harvey, “Generalization of the Lyot filter and its application to snapshot spectral imaging,” Opt. Express 18(6), 5602–5608 (2010).

11. T. Okamoto and I. Yamaguchi, “Simultaneous acquisition of spectral image information,” Opt. Lett. 16(16), 1277–1279 (1991).

12. M. R. Descour, C. E. Volin, E. L. Dereniak, T. M. Gleeson, M. F. Hopkins, D. W. Wilson, and P. D. Maker, “Demonstration of a computed-tomography imaging spectrometer using a computer-generated hologram disperser,” Appl. Opt. 36(16), 3694–3698 (1997).

13. L. Gao, R. T. Kester, N. Hagen, and T. S. Tkaczyk, “Snapshot Image Mapping Spectrometer (IMS) with high sampling density for hyperspectral microscopy,” Opt. Express 18(14), 14330–14344 (2010).

14. L. Gao, R. T. Smith, and T. S. Tkaczyk, “Snapshot hyperspectral retinal camera with the Image Mapping Spectrometer (IMS),” Biomed. Opt. Express 3(1), 48–54 (2012).

15. A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47(10), B44–B51 (2008).

16. M. W. Kudenov and E. L. Dereniak, “Compact real-time birefringent imaging spectrometer,” Opt. Express 20(16), 17973–17986 (2012).

17. N. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng. 52(9), 090901 (2013).

18. G. T. McCollough, C. M. Rankin, and M. L. Weiner, “6.1: Roll-to-roll manufacturing considerations for flexible, cholesteric liquid crystal (ChLC) display media,” SID Symp. Dig. Tech. Pap. 36(1), 64–67 (2005).

19. H. Sato, K. Miyashita, M. Kimura, and T. Akahane, “Study of liquid crystal alignment formed using slit coater,” Jpn. J. Appl. Phys. 50(1S2), 01BC16 (2011).

20. M. W. Kudenov, M. J. Escuti, E. L. Dereniak, and K. Oka, “White-light channeled imaging polarimeter using broadband polarization gratings,” Appl. Opt. 50(15), 2283–2293 (2011).

21. M. W. Kudenov, M. E. L. Jungwirth, E. L. Dereniak, and G. R. Gerhart, “White-light Sagnac interferometer for snapshot multispectral imaging,” Appl. Opt. 49(21), 4067–4076 (2010).

22. J. D. Gaskill, Linear Systems, Fourier Transforms, and Optics (IET, 1978).

23. R. Kingslake and R. R. Shannon, eds., Applied Optics and Optical Engineering, Vol. XI (Academic, 1992), Chap. 4.

24. Y. Wang, M. Kudenov, A. Kashani, J. Schwiegerling, and M. Escuti, “Snapshot retinal imaging Mueller matrix polarimeter,” in Polarization Science and Remote Sensing VII, Proc. SPIE 9613, 96130A (2015).

25. T. A. Mitchell and T. W. Stone, “Compact snapshot multispectral imaging system,” U.S. patent US8027041B1 (September 27, 2011).
