A dual-channel lateral shearing beam splitter was used in a Fourier transform imaging spectrometer, forming a dual-channel imaging spectrometer, to investigate the usability of this technique for large field-of-view (FOV) spectral detection. The large FOV obtained by stitching together the individual FOVs of the two channels greatly improved the spectral detection efficiency for large-area targets. This report describes the principle of the dual-rectangle lateral shearing beam splitter and presents analyses of the lateral shearing distance, FOV, modulation, and dual-channel stitching method. Large-FOV spectral images of a scene were acquired experimentally at visible wavelengths, confirming the effectiveness of this technique.
© 2016 Optical Society of America
Imaging spectroscopy is a type of multi-dimensional information-processing technology that provides target information as three-dimensional data cubes, which include two dimensions of spatial information and a third dimension of spectral information. Not only the geometric shapes of targets, but also their physical properties, can be obtained by imaging spectrometry, and the target characteristics can be analyzed quantitatively. The high detection sensitivity of imaging spectroscopy has made it a well-recognized technique that is widely used in scientific applications, such as astrophysics, resource censuses, global pollution and disaster monitoring, atmospheric composition detection, and other remote sensing applications [1–5]. An imaging spectrometer based on dispersion was the first to be applied to remote sensing [6,7]. However, the slit in this type of system causes drawbacks such as low throughput, signal-to-noise-ratio (SNR) reduction, and decreased spatial resolution. Fourier transform imaging spectrometers had their genesis in the late 1980s after the invention of charge-coupled devices (CCDs) and were further developed in remote-sensing programs [8–13]. In this type of spectrometer, a two-dimensional spatial intensity distribution superimposed with a fringe pattern is obtained by inserting a lateral shearing interferometer into an infinity-corrected imaging system; then, the spectrum of a scene can be reconstructed by performing a Fourier transform [14–18].
The wide fields of view (FOVs) of imaging spectrometers contribute to improving the efficiency of remote sensing detection by enabling the acquisition of wide-format images. The FOVs of dispersive imaging spectrometers are generally on the order of tens of degrees and may even reach 100°. For instance, the fourth-generation advanced airborne hyperspectral imaging spectrometer developed by Science and Technology International provides a 40° FOV; the airborne portable remote imaging spectrometer developed by the Jet Propulsion Laboratory for ocean spectral imaging has a 36° FOV; the compact airborne spectrographic imager developed in Canada has a 35.4° FOV; the medium-resolution imaging spectrometer developed in France has a 68.5° FOV; and the imaging spectrometer developed in the Netherlands has a FOV of up to 100°. The FOVs of Fourier transform imaging spectrometers are generally on the order of several degrees [24–28]. For example, the scanning mirror interference imaging spectrometer reported by Lawrence Livermore National Laboratory has a 9° FOV; the wind imaging interferometer, developed by Canada and France to measure the upper atmospheric wind field and launched with the Upper Atmosphere Research Satellite, has an 8° × 6° FOV; the digital array scanning interference imaging spectrometer supported by NASA has a 5° FOV; the airborne Fourier transform hyperspectral imager developed by the American Kestrel Company and the Florida Institute of Technology has a 7.5° FOV; the Aerospace Leap-frog Imaging Stationary interferometer for Earth Observation, supported by the Italian Space Agency, has a 4.3° FOV; the Caméra Hyperspectrale de Démonstration, developed in France, has a 4° FOV; and the hyperspectral cameras developed by Telops in Canada have 6.4° × 5.1° FOVs.
Fourier transform imaging spectrometers without slits have the advantages of high throughputs and high spatial resolutions due to the absence of exit slits [11,12,29] and are viable alternatives to dispersive instruments in the visible and infrared spectral ranges. However, the applications of Fourier transform imaging spectrometers are limited because of the low efficiencies caused by their narrow FOVs. Thus, it is necessary to develop a Fourier transform imaging spectrometer with a larger FOV for remote sensing.
For this purpose, a novel wide-FOV hyperspectral imaging interferometer using dual-channel stitching is presented in this report. A dual-rectangle lateral shearing beam splitter is employed to split the incident light into two pairs of parallel beams; then, two interferometric images with different FOVs can be obtained from the two channels. The spectrum of a scene can subsequently be reconstructed by performing a Fourier transform. Wide-format imaging can be realized by stitching the FOVs of the two channels to improve the spectral detection efficiency significantly for remote sensing.
The remainder of this report is organized as follows. First, the principle of the dual-rectangle lateral shearing beam splitter is introduced. Then, the lateral shearing distance, FOV, modulation, and dual-channel stitching of the system are discussed in detail. Lastly, an experiment conducted to confirm the effectiveness of this system is described, and the experimental results in the visible wavelength range are outlined.
2. Principles of dual-channel FOV stitching imaging spectroscopy
To set up the proposed dual-channel FOV stitching imaging spectrometer, a dual-rectangle lateral shearing beam splitter is inserted into an infinity-corrected imaging system along with two CCDs, as shown in Fig. 1(a). Collimated light from the scene passes through the lateral shearing beam splitter and is split into two pairs of parallel beams that travel different paths and are subsequently recombined by imaging lenses. Then, two-dimensional spatial intensity distributions from the two different FOVs that are superimposed with the fringe pattern are obtained by the two detectors, and a Fourier transform is performed to reconstruct the spectrum of the scene.
The dual-channel lateral shearing beam splitter is composed of two plane beam splitters, BS1 and BS2, and four reflectors, M1, M2, M3, and M4, as illustrated in Fig. 1(b). The incident light is split into two beams, one of which is reflected by M1 and M2, while the other is reflected by M3 and M4. The two beams finally pass through BS2, and two channels corresponding to the up-FOV and down-FOV are provided, as shown in Fig. 1(b). The beam splitter is called a dual-rectangle lateral shearing beam splitter because the optical paths form the shapes of two rectangles, as depicted in Fig. 1(b). M1 is perpendicular to M2, and M3 is perpendicular to M4. M1, BS1, BS2, and M4 are parallel to each other. When M3 and M4 are at the positions of the dashed lines in Fig. 1(b), M1, M2, and BS1 are symmetric to M3, M4, and BS2, and the beams emerging from BS2 coincide. Lateral shearing occurs when M3 and M4 are translated up from the positions of the dashed lines to those of the solid lines by a distance dʹ, producing a lateral shearing distance d. The unfolded optical layout of the dual-rectangle lateral shearing beam splitter is shown in Fig. 2.
The light travels separated through the dual-rectangle lateral shearing beam splitter and passes BS1 and BS2 each only once; in other words, the light does not backtrack, which would cause a decrease in energy. The two channels of the beam splitter can be used to achieve wide-format imaging by stitching together the different FOVs of the two channels, thereby greatly improving the spectral detection efficiency for remote sensing.
Different positions in the scene correspond to different incident light angles θ, each composed of an incident angle θx measured relative to a line parallel to the lateral shearing direction and an incident angle θy measured relative to a line perpendicular to the lateral shearing direction. The optical path difference between the two sheared beams in each channel changes with θx, so different points on the imaging plane correspond to different optical path differences, enabling images of the scene superimposed with the fringe pattern to be obtained.
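For reference, the fringe formation can be summarized with the standard lateral-shearing relations (a sketch in generic notation, since the paper's numbered equations are not reproduced here; B(σ) denotes the source spectrum in wavenumber σ):

```latex
% Optical path difference of two beams sheared by d at incident angle \theta_x:
\Delta = d \sin\theta_x \approx d\,\theta_x ,
% and the recorded two-beam fringe intensity for a source spectrum B(\sigma):
I(\Delta) = \int_0^{\infty} B(\sigma)\left[1 + \cos(2\pi\sigma\Delta)\right]\,\mathrm{d}\sigma .
```

The linear dependence of Δ on θx is what maps fringe position to optical path difference across the image plane.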
Different FOVs are obtained by tilting one of the CCD sensors up and the other down, as shown in Fig. 3. The shearing direction of the lateral shearing beam splitter is perpendicular to the plane of the figure. When L4 and CCD1 are tilted up at an appropriate angle, as shown in Fig. 3(a), CCD1 acquires images of the down-FOV of the scene superimposed with the fringe pattern. Likewise, when L3 and CCD2 are tilted down at an appropriate angle, as shown in Fig. 3(b), CCD2 acquires images of the up-FOV of the scene superimposed with the fringe pattern.
The intensity distributions of the two FOVs take the standard Fourier-transform-spectroscopy form: the scene intensity modulated by a cosine term whose phase is determined by the wavenumber and the position-dependent optical path difference.
The interferogram of each point of the scene is recorded as it moves through the FOV, and the spectral information is calculated from the interferogram using the Fourier transform.
The scene appears to shift between subsequent images, while the fringe pattern is stationary, as illustrated in Fig. 4. Thus, image registration is necessary to obtain the interferogram of each point, because the same point from the scene has different coordinates in different images.
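As an illustration of the reconstruction step, the following sketch simulates an ideal monochromatic interferogram sampled over the ±30 μm OPD range used in Section 5 and recovers the wavelength by a discrete Fourier transform (all names and parameter values here are illustrative, not the system's actual processing chain):

```python
import numpy as np

# Illustrative parameters (not the instrument's actual sampling).
n = 1024
opd_max = 30e-6                               # +/- 30 um OPD range
opd = np.linspace(-opd_max, opd_max, n)       # uniform OPD grid (m)
sigma0 = 1.0 / 638e-9                         # wavenumber of a 638 nm line

# Ideal two-beam fringe: I(Delta) = 1 + cos(2*pi*sigma0*Delta).
interferogram = 1.0 + np.cos(2 * np.pi * sigma0 * opd)

# Remove the DC term, then Fourier-transform to the wavenumber domain.
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
sigma = np.fft.rfftfreq(n, d=opd[1] - opd[0])  # wavenumber axis (1/m)
peak_wavelength = 1.0 / sigma[np.argmax(spectrum)]
```

With these numbers the recovered peak lies within one spectral bin (a few nanometres here) of 638 nm, consistent with the resolution implied by a 30 μm maximum OPD.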
3. System analysis for dual-channel interferometric imaging spectroscopy
3.1 Lateral shearing distance
The lateral shearing distance d, in addition to θ, determines the optical path difference. d is also associated with the translation of M3 and M4 by dʹ, as shown in Fig. 5. The relationship between d and dʹ can be obtained using a ray tracing procedure.
As illustrated in Fig. 5, M3 is perpendicular to M4, and the angle between M3 and the horizontal is π/4. Considering the general case, the incident angle measured with respect to M3 is α. The incident beam travels along ABC before the translation of M3 and M4; the emerging beam and the light reflected by M1 and M2 then coincide after passing BS2. The incident beam travels along A′B′C′ when M3 and M4 are moved by dʹ to the position of the dotted line, and the emerging beam is translated by d, which is equal to the length of line OQ. The relationship between d and dʹ, given in Eq. (6), follows from this geometry; simplifying the trigonometric functions reduces Eq. (6) to d = 2dʹ. The maximum value of dʹ is H/7, and the maximum value of d is 2H/7, as discussed in Appendix B, where H is the aperture of each reflector.
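The factor of two can also be seen without ray tracing: in the plane of the figure, two perpendicular mirrors act as a half-turn about their intersection line O (a sketch of the geometric argument, not a reproduction of the paper's Eq. (6)):

```latex
% Reflection in M3 followed by M4 (perpendicular mirrors) is the point
% reflection x \mapsto 2O - x about their intersection line O.
% Translating the pair by d' moves O to O + d', so every emerging ray shifts by
\left[\,2(O + d') - x\,\right] - \left[\,2O - x\,\right] = 2d' ,
% independent of the incident angle \alpha; hence d = 2d'.
```

This also explains why the shear is insensitive to the incident angle α, which is what makes translating M3 and M4 together a robust shearing mechanism.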
3.2 Modulation of interferometric images
The modulation of the interferometric images strongly influences the SNR of the fringe pattern, which indicates the efficiency of input radiation utilization. The modulation of each channel (up-FOV and down-FOV) is composed of two factors: one associated with the reflectances of the beam splitters and reflectors, and one that depends on the overlap between the two Airy disks formed on the image plane by the sheared beams.
The reflectivity amplitudes of the s- and p-polarized beams from the beam splitters in the system greatly affect the modulation. Assuming that rBS1-s, rBS2-s, rM1-s, rM2-s, rM3-s, and rM4-s are the reflectivity amplitudes of the s-polarized beams from BS1, BS2, M1, M2, M3, and M4, respectively, and that rBS1-p, rBS2-p, rM1-p, rM2-p, rM3-p, and rM4-p are the corresponding reflectivity amplitudes of the p-polarized beams, the reflectance-dependent modulation factors of the two channels can be respectively expressed as
Assuming that the reflectors in the system have reflectances of approximately 1 and that the beam splitters have equal s- and p-polarized reflectivity amplitudes (i.e., a 1:1 split ratio), these expressions simplify accordingly.
The incident light passes through or is reflected by the beam splitter film only once in the down-FOV channel, so the emerging beams have the same polarization state and the corresponding modulation factor is maximized in any situation.
Light from point S of the scene is separated into two coherent interfering beams that create two Airy disks in the imaging plane after passing through the lateral shearing beam splitter. The two Airy disks will not entirely coincide with one another if there is an error of ε in the angle between M3 and M4, as shown in Fig. 6. In this case, the angle between M3 and M4 is π/2 + ε instead of the expected π/2, and the total angular error between the separated beams incident on the imaging lens of each FOV is 2ε relative to the expected incidence angles.
In the ideal case in which ε = 0, the overlap-related modulation factors have their maximum value of 1, as discussed in Appendix A. As the angular error increases, the distance between the centers of the Airy disks increases, the overlap-related modulation factors decrease, and the image quality degrades. According to the Rayleigh criterion, the two Airy disks would be completely separated if the distance between their centers exceeded the radius of each disk. The corresponding maximum value of ε is given by
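Since the expression itself is not reproduced here, one consistent estimate follows directly from the Rayleigh criterion stated above, assuming an imaging-lens focal length f, aperture D, and a disk-center separation s ≈ 2εf (f, D, and s are symbols assumed for this sketch):

```latex
% Airy-disk radius of an imaging lens with focal length f and aperture D:
r = 1.22\,\lambda f / D .
% An angular error 2\varepsilon between the sheared beams separates the disk
% centers by s \approx 2\varepsilon f, so complete separation (s > r) occurs at
\varepsilon_{\max} \approx 0.61\,\lambda / D .
```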
4. Image registration and image stitching
4.1 Fast sub-pixel interference image registration
The system acquires a series of interferometric images using the push-broom method. Image registration is necessary to obtain the interferogram of each point because each scene point has different coordinates in different images.
It is assumed that the intensity distribution of an interferometric image is I(x, y) and that F(u, v) is the Fourier transform of I(x, y). According to the Fourier shift theorem, the Fourier transform of I(x − x0, y − y0) is F(u, v)exp(−j(ux0 + vy0)), where j is the imaginary unit. If the intensity distribution of the first interferometric image is I1(x, y), then the intensity distribution of an interferometric image translated by (x0, y0) relative to I1(x, y) can be expressed as I2(x, y) = I1(x − x0, y − y0), whose Fourier transform, given in Eq. (11), is F2(u, v) = F1(u, v)exp(−j(ux0 + vy0)). The normalized cross-power spectrum of the two images is therefore the pure phase term exp(j(ux0 + vy0)).
Since the inverse transform of exp(j(ux0 + vy0)) is a Dirac delta function centered at (x0, y0), the translation of I2(x, y) can be determined simply by locating the pulse peak in the inverse Fourier transform of the cross-power spectrum. The registration is performed by translating I2(x, y) by (-x0, -y0).
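A minimal sketch of this phase-correlation step (assuming periodic image boundaries; the function and variable names are illustrative):

```python
import numpy as np

def phase_correlate(img1, img2):
    """Estimate the integer-pixel translation (y0, x0) of img2 relative
    to img1 by locating the Dirac-delta peak in the inverse FFT of the
    normalized cross-power spectrum."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    R = F1.conj() * F2
    R /= np.abs(R) + 1e-15            # normalize: keep only the phase term
    corr = np.fft.ifft2(R).real       # delta function centered at (y0, x0)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half of each axis to negative shifts.
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))
```

Registration then amounts to translating the second image by the negated shift, as described above.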
Interference image registration is a process of geometric matching for the scene in an image. Pixel image registration cannot yield the required interferogram extraction accuracy. The fast sub-pixel image registration algorithm based on partial up-samples avoids the significant memory usage of the global up-sample approach and has a much shorter computation time .
In the fast sub-pixel image registration algorithm, an initial peak location (x0, y0) is first estimated by the phase correlation algorithm. Then, a cross-correlation up-sampled by a factor of k is computed in a b × b pixel (1 < b < 2) region (in units of original pixels) about the initial estimate. Sub-pixel registration is achieved by searching for the peak in the b·k × b·k partially up-sampled array (in units of up-sampled pixels) with an accuracy of 1/k pixels (in units of original pixels). Assuming that the peak location in the partially up-sampled array is (Δx0, Δy0), measured from the center of the window in up-sampled pixels, the corrected peak location can be expressed as (x0 + Δx0/k, y0 + Δy0/k).
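In code, this partial up-sampling might be sketched as a matrix-multiply inverse DFT evaluated only on a b × b pixel window around the integer estimate, so no globally up-sampled array is ever formed (a sketch under assumed conventions with illustrative names, not the authors' exact implementation):

```python
import numpy as np

def refine_peak(cross_power, peak, k=100, b=1.5):
    """Refine an integer peak estimate (row, col) of the inverse FFT of a
    normalized cross-power spectrum to 1/k pixel, using a partial
    up-sample: the inverse DFT is evaluated directly (matrix multiply)
    on a b x b original-pixel window around the estimate."""
    n_r, n_c = cross_power.shape
    side = int(np.ceil(b * k))                    # up-sampled samples per axis
    offs = (np.arange(side) - side // 2) / k      # fractional offsets (pixels)
    rows = peak[0] + offs                         # sample positions (pixels)
    cols = peak[1] + offs
    fr = np.fft.fftfreq(n_r)                      # frequencies (cycles/pixel)
    fc = np.fft.fftfreq(n_c)
    # Inverse-DFT kernels evaluated at the fractional sample positions.
    kr = np.exp(2j * np.pi * rows[:, None] * fr[None, :])   # (side, n_r)
    kc = np.exp(2j * np.pi * cols[:, None] * fc[None, :])   # (side, n_c)
    local = kr @ cross_power @ kc.T               # up-sampled neighborhood
    r_i, c_i = np.unravel_index(np.argmax(np.abs(local)), local.shape)
    return rows[r_i], cols[c_i]                   # sub-pixel (y0, x0)
```

Because only a side × side neighborhood is ever materialized, memory use is independent of the image size, which is the point of the partial up-sampling approach.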
This algorithm achieves interference image registration with high accuracy and high efficiency without excessive resource usage. After image registration, a point of the scene has the same coordinates in each interferometric image, and the interferogram of the point can be extracted easily.
4.2 FOV stitching
Images of the scene from different FOVs are acquired by the system. Wide-format images can then be obtained by FOV stitching, thereby significantly improving the spectral detection efficiency for remote sensing.
The shift of the region in which the images from the two channels overlap is calculated and used for FOV stitching. It is assumed that the image size is M × N pixels. The m rows at the bottom of the image from the up-FOV and the m rows at the top of the image from the down-FOV, which constitute the nominal overlap region, are extracted to calculate the translation, as shown in Fig. 7. If the vertical shift is Sv pixels and the horizontal shift is Sh pixels, the overlap region comprises m − Sv rows with a horizontal displacement of Sh pixels. The stitched image is obtained by combining the M − m + Sv rows at the top of the image from the up-FOV, the fused overlap region of the two images, and the M − m + Sv rows at the bottom of the image from the down-FOV displaced horizontally by Sh pixels.
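The row bookkeeping described above can be sketched as follows (assuming the measured shifts Sv and Sh are already known, e.g. from phase correlation of the overlap regions, and fusing the overlap by simple averaging; names are illustrative, and a circular shift stands in for the horizontal displacement for brevity):

```python
import numpy as np

def stitch(up_img, down_img, m, sv, sh):
    """Stitch an up-FOV and a down-FOV image (each M x N) whose m-row
    nominal overlap has a measured vertical shift sv and horizontal
    shift sh (pixels). The m - sv truly overlapping rows are fused by
    averaging; the result has 2M - m + sv rows."""
    M, N = up_img.shape
    top = up_img[:M - m + sv]                        # rows unique to the up-FOV
    ov_up = up_img[M - m + sv:]                      # m - sv overlapping rows
    ov_down = np.roll(down_img[:m - sv], sh, axis=1)
    fused = 0.5 * (ov_up + ov_down)                  # fused overlap region
    bottom = np.roll(down_img[m - sv:], sh, axis=1)  # unique down-FOV rows
    return np.vstack([top, fused, bottom])
```

In practice the horizontal displacement would be handled with cropping or padding rather than a circular roll, and the fusion could weight the two channels by their modulations.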
5. Experiment and results
The experimental setup illustrated in Fig. 8 was constructed according to the schematic depicted in Fig. 1. The dual-channel lateral shearing beam splitter was composed of four reflectors, M1, M2, M3, and M4, and two plane beam splitters, BS1 and BS2. The reflectivity of each reflector was 99.9% from 450 nm to 700 nm, and the reflectors were mounted in precision optical adjusting brackets. The angular error was controlled to less than 1.4′ by applying the methods discussed in Appendix A to maintain the imaging performance of the system. M3 and M4 were placed on linear translation stages and translated 400 μm from the position at which the beams from the two channels coincided; the lateral shearing distance was therefore 800 μm according to Eq. (6). No beam was blocked, since the locations of BS1 and BS2 did not exceed the critical positions discussed in Appendix B. A split ratio of 1:1 from 450 nm to 700 nm was used to maximize the visibility of the fringe pattern superimposed on the image, as discussed in Section 3.2. The focal length of each lens was 75 mm, and lenses 1 and 2 constituted the relay optical system. The two detectors were both Baumer SXG10 models, which have resolutions of 1024 × 1024 pixels and a pixel size of 5.5 μm. The optical path difference was 30 μm. The object lens had a 9.7° FOV, while the FOV of each imaging lens was 4.2°. The FOV obtained by stitching the two different FOVs of the two channels was 7.8°.
An experiment was carried out to test the OPD linearity across the FOV. A 638 nm laser was used as the source for the experimental setup. Part of the fringe pattern, along with the experimental estimates of the OPD and a reference OPD curve obtained from theoretical analysis, is shown in Fig. 9(a). The two curves are highly consistent: the maximum deviation of the experimental OPD is less than λ/10, as shown in Fig. 9(b). Simulations of spectrum reconstruction for a 638 nm signal were performed under linear and nonlinear OPD conditions to test the effect of OPD nonlinearity on the system. The acquired spectra are shown in Fig. 10. The central wavelength of 638 nm was obtained in both cases, and a FWHM of 9.5 nm was obtained under the nonlinear condition, representing 0.5 nm of spectral broadening over the linear result, which is less than 10% of the theoretical spectral resolution. Comparing the spectra indicates that the nonlinear OPD of the presented setup causes negligible spectral broadening.
This setup was used to detect the color picture shown in Fig. 11(a). The scene was illuminated by an incandescent lamp, and the exposure time of each interferometric image was 20 ms. Each channel of the system acquired 3000 interferometric images using the push-broom technique, with a total signal acquisition time of 75 s. Figures 11(b) and 11(c) present four interferometric images each from the up-FOV and down-FOV, respectively. As a result of the push-broom method, the scene shifts from right to left between subsequent images. Image registration with an accuracy of 1/100 pixel was performed. The images after registration are presented in Fig. 12, in which the scene is motionless while the interference fringe pattern shifts from left to right. The interferogram of point A in the scene was then extracted, as illustrated in Figs. 13(a) and 13(b), and the spectrum reconstructed by performing a Fourier transform is shown in Fig. 13(c). The spectrum of point A was also acquired using an Ocean Optics USB4000 commercial spectrometer, as shown in Fig. 13(c). Comparing the reflection spectra indicates that the measurement result of the presented method is highly consistent with the USB4000 result, further demonstrating the feasibility of the presented method. Four spectral images each of the up-FOV and down-FOV, reconstructed by performing Fourier transforms, are shown in Figs. 14(a) and 14(b), respectively. The wide-format spectral images obtained by stitching together the spectral images of the two FOVs are depicted in Fig. 15, and the reconstructed spectrum matches the colors of the scene.
The proposed large-FOV Fourier transform spectrometer offers some promising features. First, the system is energy-efficient. Unlike in a common-path system, the incident light in the dual-rectangle lateral shearing beam splitter is split by the first plane beam splitter into two beams that travel along different paths. Each beam passes the plane beam splitter only once, avoiding backtracking of the light in the system, and both channels exhibit throughputs as high as that in a common-path system.
In addition, the dual-rectangle structure provides good lateral shearing performance. The light is sheared by translating M3 and M4 together, and the parallelism of the sheared beams is affected only by the error of the angle between M3 and M4, not by the orientations of the other reflectors. Because M3 and M4 maintain fixed relative positions, the optical alignment problems of this non-common-path system are reduced, promoting the practical applicability of the spectrometer.
The fact that the system includes two channels also increases the range of potential applications of this imaging spectrometer. Compared with conventional Fourier transform imaging spectrometers, a much wider spectral range can be detected if one channel is used in the visible range while another is simultaneously used in the infrared range, and a much wider FOV can be obtained by stitching together images obtained simultaneously using the two channels with different FOVs. In addition, the spectral and polarization information of a target can be detected concurrently by the different channels. The features mentioned above greatly improve the detection performance and efficiency of the imaging spectrometer.
Furthermore, the polarization requirements of the plane beam splitters are greatly reduced compared with those of the beam splitter in a Sagnac interferometer. The incident light passes through or is reflected by the beam splitter film twice in a Sagnac interferometer, so the polarization of the emerging light is sensitive to the reflectivity amplitudes of the polarized beams. A substantial difference between the amplitudes of the two sheared beams arises if the reflectivity amplitudes of the s- and p-polarized beams are not equal, reducing the visibility of the fringe pattern superimposed on the images and decreasing the SNR of the system. If the proposed imaging spectrometer works in single-channel mode, in which only the up-FOV channel is used, the incident light passes through or is reflected by the beam splitter film only once in the dual-rectangle lateral shearing beam splitter, and the polarization of the emerging light is as described in Section 3.2, simplifying the polarization requirements of the plane beam splitter film coatings.
However, the beams in the proposed dual-rectangle lateral shearing beam splitter have optical paths longer than those in a common-path lateral shearing beam splitter, which reduces the optical path stability. The fringe pattern superimposed on the images becomes less visible and could even disappear due to optical path vibrations, and the interferograms of points extracted from the interferometric images are distorted, causing the reconstructed spectra to deviate from the actual spectra and vertical stripes to appear on the reconstructed spectral images, as shown in Fig. 14 and Fig. 15.
In conclusion, we propose a dual-channel FOV stitching Fourier transform imaging spectrometer based on a novel dual-channel lateral shearing beam splitter, namely, a dual-rectangle lateral shearing beam splitter. The dual-rectangle lateral shearing beam splitter is advantageous because no backtracking of light occurs and because it includes two channels. The light beams travel separately through the beam splitter and each pass the plane beam splitters only once, resulting in high energy efficiency. Compared with conventional Fourier transform imaging spectrometers, a much wider spectral range can be detected by using the two channels in different wavelength ranges simultaneously. Similarly, a much wider FOV can be obtained by stitching together the two channels' different FOVs, and targets' spectral and polarization information can be detected concurrently by the two channels. All of these factors greatly improve the performance and efficiency of the proposed imaging spectrometer.
Appendix A: Influence of reflector angle errors on modulation
Light from point S of the scene is separated into two coherent interfering beams that create two Airy disks in the imaging plane after passing through the lateral shearing beam splitter. The two Airy disks will not entirely coincide with one another if there is an error of ε in the angle between M3 and M4, as shown in Fig. 6. In this case, the angle between M3 and M4 is π/2 + ε instead of the expected π/2, and the total angular error between the separated beams incident on the imaging lens of each FOV is 2ε relative to the expected incidence angles.
S1 and S2 are the centers of the two Airy disks and are separated by distance s. The interference occurs only in the region in which the two Airy disks overlap. The influence of the reflector angle errors on the modulation can be expressed as
IA is the integral of the intensity distribution of each Airy disk and is given by Eq. (19). Substituting Eq. (19) into Eq. (18), the following equation for the modulation can be obtained:
K is determined by the radius of each Airy disk and the distance between the centers of the Airy disks. The radii of the Airy disks formed by the two channels of this system are equal to each other, and the distances between their centers are given by Eqs. (21) and (22). Substituting Eqs. (21) and (22) into Eq. (20), we obtain the following equations for the modulations of the two channels due to the reflector angle errors:
Appendix B: Aperture angle of the lateral shearing beam splitter and critical position of the plane beam splitter
In the most compact lateral shearing splitter configuration, which is illustrated in Fig. 16, the left endpoints of M3 and M4 are collinear with the right endpoint of M2. The vertical plane that contains the left endpoint of BS1 and the left endpoint of M1 is the incident plane of the lateral shearing beam splitter. The angles between the horizontal line and the reflectors are π/4.
The unfolded optical layout of the path reflected by M1 and M2 is depicted in Fig. 17. The angles between incident beams 1 and 2 and the horizontal lines are β1 and β2, respectively.
The following formulae should be satisfied to guarantee that the light can exit the lateral shearing beam splitter. Eq. (27) can be rewritten in the form illustrated in Fig. 17, in which the endpoint D of BS1 marks the critical position of BS1. This position can be expressed as
The unfolded optical layout of the path reflected by M3 and M4 is shown in Fig. 18. The angles between incident beams 1 and 2 and the horizontal lines are γ1 and γ2, respectively. M3 and M4 are translated up by dʹ.
The following formulae should be satisfied so that the beam can exit the lateral shearing beam splitter. Eq. (32) can be rewritten in the form illustrated in Fig. 18, in which the endpoint O of BS1 marks the critical position of BS1. LCI > LCF should be satisfied to ensure that two coherent interfering beams exit the lateral shearing beam splitter. From this relationship, the maximum value of dʹ is H/7, and the maximum value of d is 2H/7.
National Natural Science Foundation of China (NSFC) (61475072); Opening Project of Key Laboratory of Astronomical Optics & Technology (Nanjing Institute of Astronomical Optics & Technology, Chinese Academy of Sciences); Fundamental Research Funds for the Central Universities (30916014112-010).
References and links
1. Z. Malik, D. Cabib, R. A. Buckwald, Y. Garini, and D. Soenkeson, “A novel spectral imaging system combining spectroscopy with imaging-applications for biology,” Proc. SPIE 2329, 180–184 (1995). [CrossRef]
4. A. R. Korb, P. Dybwad, W. Wadsworth, and J. W. Salisbury, “Portable Fourier transform infrared spectroradiometer for field measurements of radiance and emissivity,” Appl. Opt. 35(10), 1679–1692 (1996). [CrossRef] [PubMed]
5. V. A. Self and P. A. Sermon, “Fourier transform infrared cell for surface studies at controlled temperatures and in controlled atmospheres with time resolution and spatial resolution,” Rev. Sci. Instrum. 67(6), 2096–2099 (1996). [CrossRef]
6. C. O. Davis, “A spaceborne imaging spectrometer for environmental assessment of the coastal ocean,” Proc. SPIE 2817, 224–230 (1996). [CrossRef]
7. R. L. Lucke, M. Corson, N. R. McGlothlin, S. D. Butcher, D. L. Wood, D. R. Korwan, R. R. Li, W. A. Snyder, C. O. Davis, and D. T. Chen, “Hyperspectral Imager for the Coastal Ocean: instrument description and first images,” Appl. Opt. 50(11), 1501–1516 (2011). [CrossRef] [PubMed]
9. A. F. H. Goetz, “Three decades of hyperspectral remote sensing of the Earth: A personal view,” Remote Sens. Environ. 113, S5–S16 (2009). [CrossRef]
10. A. H. Vaughan, “Imaging Michelson spectrometer for Hubble space telescope,” Proc. SPIE 1036, 2–14 (1989). [CrossRef]
12. P. Jacquinot, “The luminosity of spectrometers with prisms, gratings, or Fabry-Perot etalons,” J. Opt. Soc. Am. 44(10), 761–765 (1954). [CrossRef]
13. P. G. Lucey, K. Horton, T. Williams, K. Hinck, C. Budney, B. Rafert, and T. B. Rusk, “SMIFTS: A cryogenically-cooled spatially-modulated imaging infrared interferometer spectrometer,” Proc. SPIE 1937, 130–141 (1993). [CrossRef]
16. J. Craven-Jones, M. W. Kudenov, M. G. Stapelbroek, and E. L. Dereniak, “Infrared hyperspectral imaging polarimeter using birefringent prisms,” Appl. Opt. 50(8), 1170–1185 (2011). [CrossRef] [PubMed]
17. M. W. Kudenov, S. G. Roy, B. Pantalone, and B. Maione, “Ultraspectral imaging and the snapshot advantage,” Proc. SPIE 9467, 94671X (2015). [CrossRef]
19. M. Topping, J. Pfeiffer, A. Sparks, K. T. C. Jim, and D. Yoon, “Advanced airborne hyperspectral imaging system(AAHIS),” Proc. SPIE 4816, 1–11 (2002). [CrossRef]
21. S. K. Babey and C. D. Anger, “A compact airborne spectrographic imager (CASI),” in Proceedings of IEEE Conference on Geoscience and Remote Sensing Symposium (IEEE, 1989), pp. 1028–1031.
22. D. Loiseaux, A. Michel, C. Babolat, and Y. Delclaud, “MERIS camera optics development: particular processes for an original concept,” Proc. SPIE 2209, 252–261 (1994). [CrossRef]
23. B. Snijders and H. Visser, “Imaging spectrometer with a large field of view,” Proc. SPIE 2830, 331–340 (1996). [CrossRef]
24. A. Barducci, F. Castagnoli, G. Castellini, D. Guzzi, P. Marcoionni, and I. Pippi, “ALISEO on MIOSAT: an aerospace imaging interferometer for Earth observation,” in Proceedings of IEEE Conference on International Geoscience and Remote Sensing Symposium (IEEE, 2009), pp. II-464–II-467. [CrossRef]
25. A. Barducci, F. Castagnoli, G. Castellini, D. Guzzi, C. Lastri, P. Marcoionni, V. Nardino, and I. Pippi, “Developing a new hyperspectral imaging interferometer for earth observation,” Opt. Eng. 51(11), 111706 (2012). [CrossRef]
27. P. G. Lucey, M. Wood, S. T. Crites, and J. Akagi, “A LWIR hyperspectral imager using a Sagnac interferometer and cooled HgCdTe detector array,” Proc. SPIE 8390, 83900Q (2012). [CrossRef]
28. Y. Ferrec, J. Taboury, H. Sauer, P. Chavel, P. Fournet, C. Coudrain, J. Deschamps, and J. Primot, “Experimental results from an airborne static Fourier transform imaging spectrometer,” Appl. Opt. 50(30), 5894–5904 (2011). [CrossRef] [PubMed]
30. R. O. Green, “Lessons and key results from 30 years of imaging spectroscopy,” Proc. SPIE 9222, 92220B (2014). [CrossRef]
32. M. Q. Xue, B. Xiangli, and B. Q. An, “Optical systems of imaging interferometers,” Proc. SPIE 3482, 474–483 (1998). [CrossRef]