A snapshot Image Mapping Spectrometer (IMS) with high sampling density has been developed for hyperspectral microscopy, measuring a datacube of dimensions 285 × 285 × 60 (x, y, λ). The spatial resolution is ~0.45 µm with a FOV of 100 × 100 µm². The measured spectrum spans 450 nm to 650 nm and is sampled by 60 spectral channels with an average sampling interval of ~3.3 nm. Each channel’s spectral resolution is ~8 nm. The spectral imaging results demonstrate the potential of the IMS for real-time cellular fluorescence imaging.
©2010 Optical Society of America
Hyperspectral microscopy (HM) is a spectral imaging modality that obtains a sample’s full spectroscopic information and renders it in image form. It is a functional combination of a traditional high-resolution microscope and a spectrometer. The motivation for developing HM for biomedical applications comes from interest in a biological sample’s emission or reflectance spectrum, which contains important structural, biochemical, or physiological information. The three-dimensional datacube (x, y, λ) that HM measures can be used either at the cellular level, e.g. for the discrimination of spectrally overlapped fluorophores [1,2], or at the tissue level, e.g. for in vivo clinical diagnostics [3,4].
Most current HMs require scanning to acquire the datacube, either in the spatial domain, as in hyperspectral confocal microscopy, or in the spectral domain, as with acousto-optic tunable filter (AOTF) and liquid crystal tunable filter (LCTF) based spectral imagers [6,7]. In confocal fluorescence microscopy, the scanning mechanism of HMs must trade off image signal-to-noise (S/N) ratio against photobleaching. For example, in two-photon excitation hyperspectral confocal microscopy, the log-log plot of fluorescence emission versus excitation power increases with a slope of ~2, while the photobleaching rate increases with a slope ≥3. This means that the acquisition of high-contrast images is commonly accompanied by strong photobleaching, which is of particular concern in prolonged imaging experiments. Another problematic issue of scanning HMs is their low temporal resolution. For example, to acquire a datacube of size 512 × 512 × 34 (x, y, λ), state-of-the-art hyperspectral confocal microscopes provide scanning rates of 5 frames/second; however, capturing high-dynamic-range images may often require much slower scanning speeds. On the other hand, although AOTF- and LCTF-based spectral imagers can switch wavelengths very quickly (<100 microseconds for the AOTF, 50 ms for the LCTF in the visible range), their intrinsic light throughputs are low (30% for the AOTF in the visible range; 50% for the LCTF in the red and NIR region, but ~15% in the blue). In addition, wavelength-filtered scanning spectral imagers lose a factor of n in throughput when measuring n spectral channels.
Although tunable filters can shorten the total capture time by selectively acquiring spectral bands, or by varying each channel’s exposure time as a function of wavelength so that time is saved in bright channels, the low throughput and sequential acquisition mode are intrinsic barriers that cannot easily be overcome, especially in multi-fluorophore imaging, which can require at least 30 spectral channels to be captured. HMs with low temporal resolution are also unsuitable for time-sensitive imaging experiments, such as the observation of fast-diffusing molecules or the measurement of temporally resolved dynamic biological processes.
To overcome these limitations, snapshot HMs such as the Computed Tomography Imaging Spectrometer (CTIS) [14,15], the Coded Aperture Snapshot Spectral Imager (CASSI) [16,17], and the Image-Replicating Imaging Spectrometer (IRIS) have been developed. Although CTIS and CASSI can acquire data in real time, they cannot display the full-resolution datacube in real time because of massive computational requirements. In addition, CTIS and CASSI suffer from other problems, such as the missing-cone effect for CTIS and the requirement that the object be sparse in the gradient domain for CASSI. The IRIS is a compact spectral imaging device based on a generalization of the Lyot filter. It offers promise for real-time applications because image acquisition is direct, without the need for complicated post-processing. However, the linear polarization requirement on its input light makes IRIS less applicable to low-light-level imaging applications whose emission signals exhibit a large degree of polarization anisotropy. In those cases, the theoretical maximum throughput that IRIS can reach is only 50%. In addition, the number of spectral channels that IRIS has demonstrated so far is 8. Acquiring a large number of spectral channels (as in hyperspectral imaging) may present difficulties for the IRIS technique, such as blurring due to prism dispersion and the high cost and limited availability of large Wollaston prisms.
Recently, a novel snapshot HM imager, the Image Mapping Spectrometer (IMS), was developed and implemented in fluorescence microscopy. The operating principle of the IMS is based on the redirection and dispersion of image zones by a custom mapping mirror (termed the image mapper) and a prism. A one-to-one correspondence is established between voxels in the datacube and pixels on a large-format CCD camera. The spectral layers of the datacube can be easily extracted from the raw image by a simple real-time remapping algorithm. Since the IMS obtains its data directly, it requires little image post-processing and reconstruction, and the acquisition and display of the datacube can be performed simultaneously.
The previous proof-of-concept IMS prototype could measure a datacube of size 100 × 100 × 25 (x, y, λ) over a field of view (FOV) of 45 × 45 µm² in a single integration period. However, it had only 100 spatial samples in each direction because the mirror facet density on the image mapper was relatively low (6.25 facets/mm, facet width = 160 µm). The low spatial sampling made the prototype less applicable to imaging biological samples with high spatial information content. To improve the spatial sampling, narrower mirror facets (70 µm wide) and a modified fabrication method were adopted. As a result, the achievable mirror facet density was increased to 14.29 facets/mm (facet width = 70 µm), and a total of 285 mirror facets were produced on a 20 mm substrate. The information content that can be processed by this image mapper is nearly 10 times that of the previous one.
In this paper, the design and development of a high-sampling IMS system are reported. The acquired datacube is of size 285 × 285 × 60 (x, y, λ). While maintaining the spatial resolution (0.45 µm) of a particular microscope objective (Zeiss EC Plan-Neofluar 40 ×, N.A. = 0.75), the system has a FOV of over 100 × 100 µm². The spectral range is from 450 nm to 650 nm and is sampled by 60 spectral channels with an average sampling interval of ~3.3 nm. The spectral sampling rate is a little over twice the number of resolvable spectral bands (25 in our particular design) in order to satisfy the Nyquist sampling condition on the camera (the relationship between datacube size and IMS design parameters is described in Section 2 of our previous work). The channel’s average spectral resolution is approximately 200 nm/25 = 8 nm.
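The sampling arithmetic above can be written out as a quick sanity check (a minimal sketch; all numbers are taken directly from the text):

```python
# Spectral sampling figures for the high-sampling IMS, as quoted in the text.
spectral_range_nm = 650 - 450          # 200 nm full spectral range
n_channels = 60                        # spectral sampling channels
n_resolvable = 25                      # resolvable bands (design value)

sampling_interval_nm = spectral_range_nm / n_channels    # ~3.3 nm per channel
resolution_nm = spectral_range_nm / n_resolvable         # 8 nm per band
oversampling = n_channels / n_resolvable                 # 2.4, i.e. > 2

# "a little over twice the number of resolvable bands" -> Nyquist satisfied
assert oversampling > 2
```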
To evaluate the imaging performance of the high-sampling IMS, the line spread function (LSF) and the modulation transfer function (MTF) of the system were measured (see Section 4.1); they indicate diffraction-limited performance. A sample of triple-labeled bovine pulmonary artery endothelial cells (BPAEC) was imaged by the system with a 1 s integration time (see Section 4.2). This is the first demonstration of the IMS’s performance in cellular fluorescence imaging. The experimental results demonstrate that the IMS can become an important spectral imaging modality in hyperspectral microscopy and has potential for use in many real-time bio-imaging applications.
2. Instrument description
2.1 Optical layout
The optical layout of the high-sampling IMS system is shown in Fig. 1. The fore-optics are provided by a Zeiss Axio Observer A1 microscope equipped with an EC Plan-Neofluar 40 ×, N.A. = 0.75 objective. The sample is placed on the microscope stage and epi-illuminated with a 120 W X-Cite arc lamp. A double-port adapter is mounted on the microscope side port to provide two switchable imaging ports. An intermediate image is formed at one imaging port, where the field stop of the IMS is located. A color camera (Lumenera Infinity 2-1C) is mounted at the other imaging port to provide a direct imaging reference.
The intermediate microscopic image is first relayed onto a custom-fabricated component (the image mapper) by a 5 × magnification optical relay system (a combination of a Zeiss EC Epiplan-Neofluar HD 5 × objective and a 130 mm Zeiss tube lens). The image mapper is a field remapping unit consisting of 285 micro-scale (width = 70 µm) mirror facets. Each mirror facet has a two-dimensional tilt angle (αi, βj) (in radians) with respect to the substrate plane, and the facets are grouped into periodic blocks (the image mapper specifications and fabrication details are reported elsewhere). The N.A. of the incident light at the image mapper is 0.004, and the point-spread function (PSF) is ~152 µm (λ = 500 nm). Since the width of the mirror facets is 70 µm, the PSF is sampled by 2.2 mirror facets, and the Nyquist sampling condition is therefore satisfied.
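The Nyquist check at the image mapper can be reproduced with a few lines (a minimal sketch; using the Airy-disk diameter 1.22λ/NA as the PSF size is our assumption, but it reproduces the ~152 µm figure quoted above):

```python
# Nyquist-sampling check at the image mapper, using values from the text.
wavelength_um = 0.5        # 500 nm
na_at_mapper = 0.004       # N.A. of the light incident on the image mapper
facet_width_um = 70.0      # width of one mirror facet

# Airy-disk diameter as the PSF size (our assumed model): 1.22 * lambda / NA
psf_um = 1.22 * wavelength_um / na_at_mapper     # ~152.5 um
facets_per_psf = psf_um / facet_width_um         # ~2.2 facets per PSF

# Nyquist requires at least 2 samples per resolvable spot
assert facets_per_psf >= 2.0
```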
The reflected light from the image mapper is collected by the collecting lens (Olympus MVPLAPO 1 ×, focal length = 90 mm) and directed into 25 corresponding pupils. A 4 × beam expander (Schneider telecentric lenses, 1:4) is then used to magnify the pupils to match the diameter of the re-imaging optics. The magnified pupils are then dispersed by a prism (material: ZF6, 6° wedge angle) and re-imaged by an array of 25 lenses. A 5 × 5 pupil mask (Dia. = 3 mm) is placed just before the reimaging lenses to prevent the leakage of light between adjacent pupils and thus limit the system’s crosstalk level (see Section 2.2 for details). Each reimaging lens has a focal length of 115 mm and consists of a positive doublet (F.L. = 25 mm, Dia. = 6.25 mm, Edmund Optics K32-305) and a negative doublet (F.L. = −12.5 mm, Dia. = 6.25 mm, Edmund Optics K45-420) (see Fig. 2). A Zemax model verified that the reimaging optical system is diffraction-limited. The reimaging lens array is mounted on two custom-designed plates, and the focal length of the complete assembly can be finely tuned by adjusting their separation. The raw IMS image is captured by a large-format CCD camera (Apogee U16M, 4096 × 4096 pixels, pixel size: 9 µm × 9 µm).
2.2 Pupil array configuration vs. channel crosstalk
The pupil array configuration at the back focal plane of the collecting lens is determined by the 2D tilt angles of the mirror facets and the N.A. of the light reflected from the mapper (see Fig. 3). From geometrical optics, the adjacent pupil distance d and the pupil diameter D can be calculated as d = 2fΔα and D = 2f·NA, where f is the focal length of the collecting lens and Δα is the tilt-angle difference between adjacent mirror facets.
To avoid leakage of light into adjacent pupils, the pupils should not overlap, which requires the adjacent pupil distance to be at least as large as the pupil diameter D. However, due to diffraction at the image mapper, the diameter of each pupil in the array is usually larger than the value predicted by geometrical optics. To predict the specific pupil shape (which is non-circular, primarily due to diffraction in the direction perpendicular to the long axis of the mapper’s reflective facets), a diffraction model of the image mapper has been developed and used. The model suggests that, when the Nyquist sampling condition is satisfied on the image mapper, a pupil spacing somewhat larger than the geometrical prediction needs to be maintained in order to lower the system’s crosstalk level to 1%, while at the same time keeping the pupils as compact as possible. In addition, the current high-sampling image mapper has surface form error on the mirror facets, which leads to additional side lobes in the pupil shape and causes about 10% crosstalk. To reduce this effect, a 5 × 5 mask of 3 mm diameter apertures is placed immediately before the re-imaging lenses to prevent light from entering neighboring pupils. The conjugate image of the pupil mask at the back focal plane of the collecting lens has a diameter of 0.75 mm, slightly larger than the pupil size of 0.72 mm predicted by geometrical optics. Measurements show that 6% crosstalk exists in the current system, a four-percentage-point drop from the previously reported value, which was obtained without a pupil mask.
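The geometrical pupil-size figure quoted above can be verified directly (a minimal sketch; D = 2f·NA is our assumed form of the geometrical-optics prediction, and it reproduces the 0.72 mm value in the text):

```python
# Geometrical estimate of the pupil diameter at the collecting lens.
f_collecting_mm = 90.0    # focal length of the Olympus MVPLAPO collecting lens
na_reflected = 0.004      # N.A. of the light reflected from the image mapper

pupil_diameter_mm = 2 * f_collecting_mm * na_reflected   # 0.72 mm
mask_conjugate_mm = 0.75  # conjugate image of the 3 mm pupil mask (from text)

# The mask conjugate is deliberately slightly larger than the pupil itself,
# so it blocks crosstalk without vignetting the nominal pupil.
assert mask_conjugate_mm > pupil_diameter_mm
```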
Each active pixel on the CCD camera is encoded with spatial and spectral information from the sample. In general, this mapping between each pixel on the detector array and each voxel in the datacube is described by a five-dimensional array – the mapping tensor. If the crosstalk is sufficiently small, then this 5D tensor characterizes a mapping which is approximately one-to-one: each voxel in the datacube is associated with a unique pixel on the detector array, and vice-versa. This reduces the dimensionality of the problem such that the mapping can be described by a single pair of matrices, each of the same size as the detector array.
In order to describe the mapping geometry, we establish a pair of coordinate systems. The datacube is indexed by a triplet of variables (x, y, k) termed the datacube coordinate (k is the wavelength index); pixels on the detector array are indexed by a pair of variables (x', y'), termed the camera coordinate (see Fig. 4). The image mapping principle of IMS can then be described in terms of these two coordinate systems. First, the object’s 3D datacube (x, y, k) is sliced into 2D x-planes by the image mapper. Each x-plane is associated with a dispersed mirror facet image (termed an image line). Next, these 2D x-planes are mapped onto different regions of a large-format CCD camera. Thus, the one-to-one correspondence between voxels in the datacube and pixels on the detector array can be described abstractly as a map (x, y, k) ↔ (x', y').
We first rasterize the datacube so that all voxels can be indexed by a single variable m. For example, a datacube with x = 1, …, Nx; y = 1, …, Ny; and k = 1, …, Nk can be written as cm for m = 1, …, NxNyNk. For the IMS described here, the datacube dimensions are Nx = Ny = 285 and Nk = 60, so that m ranges from 1 to 4,873,500.
The calibration process involves determining two matrices. The first matrix, M, gives the index m associated with each pixel (x', y') on the detector array. For example, if the element of M at pixel (100, 100) has the value 2000, then the value measured at pixel (100, 100) is mapped to voxel #2000 in the datacube. Similarly, a second matrix, S, gives the system sensitivity associated with each pixel. That is, when viewing a uniformly illuminated monochromatic object with the IMS and examining the appropriate pixels on the detector array, we generally find that their values are not quite equal. If, for example, one voxel mapping to this k-plane of the datacube is measured to have a value of 500 counts while another has only 400, then the sensitivity matrix records these values, so that multiplication by their reciprocals produces a properly normalized estimate of the object irradiance.
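The lookup-table remapping described here can be sketched in a few lines of numpy (a toy illustration, not the real calibration data: the array names, the 4 × 4 × 3 cube size, and the convention that M stores 1-based voxel indices with 0 marking unused pixels are all our assumptions):

```python
import numpy as np

# Toy datacube dimensions (the real system is 285 x 285 x 60).
Nx, Ny, Nk = 4, 4, 3
rng = np.random.default_rng(0)

raw = rng.uniform(100, 500, size=(8, 8))     # toy raw detector image
S = np.full_like(raw, 0.8)                   # toy flat sensitivity matrix
M = np.zeros(raw.shape, dtype=int)           # mapping matrix, 0 = unused pixel
M.flat[: Nx * Ny * Nk] = np.arange(1, Nx * Ny * Nk + 1)   # toy one-to-one map

# Remap: normalize each active pixel by its sensitivity, then scatter it
# into the rasterized datacube at the voxel index stored in M.
cube = np.zeros(Nx * Ny * Nk)
active = M > 0
cube[M[active] - 1] = raw[active] / S[active]
cube = cube.reshape(Nx, Ny, Nk)              # rasterized index m -> (x, y, k)
```

Because the remapping is a single indexed assignment, the full datacube can be assembled at negligible computational cost, which is what allows acquisition and display to proceed simultaneously.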
The first step in the calibration involves illuminating the system with a narrowband, spatially uniform source – an elemental object (a k-plane in Fig. 4). For this we use a supercontinuum laser (Fianium SC 400-4) filtered with an acousto-optic tunable filter (AOTF) (bandwidth: 2 nm to 6 nm in the visible range) and passed into an integrating sphere (Ocean Optics FOIS-1) whose exit port is then imaged by the IMS. For the full calibration data set, the central wavelength passed by the filter is swept from 450 nm to 650 nm in 1 nm steps. The location of an image line in the spectral spread direction is recorded and shown in Fig. 5; the width of each horizontal line indicates the spectral width (in nm) assigned to each of the 60 spectral channels. Note that in Fig. 5 the spectral sampling is denser in the blue region than in the red region because of the non-linear dispersion of the prism.
The raw data corresponding to an elemental object produce an “elemental image” (see Fig. 6), 60 of which are used to produce the full calibration data set. Within each elemental image, we can see Lx image lines, each of which corresponds to a vertical column within the datacube. The calibration algorithm uses the elemental image to estimate the x’-coordinate of each image line. Next, using knowledge of the tilt angles in the mapping mirror design, the algorithm designates which column of the k-plane corresponds to a given image line. Performing this process for each elemental image allows the assembly of the full datacube.
There are several complications which require the simple procedure above to be modified. The first is the presence of image distortion, which warps the image lines into curves [see Fig. 7(a)]. Thus, rather than simply locating the x’-position of a line, the presence of distortion requires one to fit a curve to the line image and interpolate the result onto the pixel grid [see Figs. 7(b) and 7(c)]. The second complication is the presence of mis-registration between the various sub-pupils (this can be seen in the elemental image in Fig. 6). As designed, the pupil array is distributed on a uniform 5 × 5 grid, as indicated in Fig. 3. However, the pupil positions cannot be controlled to subpixel accuracy due to manufacturing constraints, so the calibration process must compensate for the change in position. To estimate this shift, all that is needed is to image a target with a horizontal edge feature: locating this feature within a second set of elemental images (collected with the target inserted into the field of view) provides an accurate estimate of the shift.
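The distortion-correction step can be sketched as a curve fit to the detected line centroids (a toy illustration: the quadratic warp model, the noise level, and all sample values are invented for demonstration, not taken from the instrument):

```python
import numpy as np

# Toy centroids of one warped image line: nominally at x' = 200, bowed by a
# small quadratic distortion, with sub-pixel centroiding noise added.
y_pix = np.arange(0, 100)                    # along-line pixel coordinate
rng = np.random.default_rng(1)
x_centroid = 200 + 0.001 * (y_pix - 50) ** 2 + rng.normal(0, 0.05, y_pix.size)

# Fit a low-order curve to the warped line, then evaluate it on every row of
# the pixel grid; these fitted x'-positions feed the interpolation step.
coeffs = np.polyfit(y_pix, x_centroid, deg=2)
x_fit = np.polyval(coeffs, y_pix)
```

Fitting a smooth curve rather than using the raw per-row centroids suppresses the centroiding noise while capturing the slowly varying distortion.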
The above procedure provides the complete mapping matrix M; the second matrix obtained during the calibration, S, can also be measured from the elemental images. This sensitivity matrix is needed because variations in reflectance at different locations on the image mapper, spectral variation in the transmission of the optics, and variations in detector responsivity all modulate the measured irradiance of the remapped image. Thus, the estimated datacube is the result of the mapping operator M applied to the irradiance values I(x', y') in the raw image normalized by the spectral sensitivities: ĉm = I(x', y')/S(x', y'), where m = M(x', y').
4. Imaging results
4.1 Line spread function (LSF) and modulation transfer function (MTF) measurement
The spectral resolution of the IMS system is equal to the full spectral range divided by the number of resolvable spectral bands, whose width is defined as the full width at half maximum (FWHM) of the line spread function (LSF) under monochromatic illumination. Theoretically, the number of resolvable spectral bands is proportional to the number of 2D tilts (i.e. 25) in each periodic block of the image mapper, and for our particular design the proportionality constant is 1. However, due to image aberrations, the actual number achieved in practice is slightly smaller. To establish this figure experimentally, a 635 nm diode laser is guided into an integrating sphere (Ocean Optics FOIS-1) to obtain a uniform monochromatic light field. The line width is measured for all image lines in the raw image. The mean FWHM of the measured LSF at the image plane is ~2.49 pixels (22.41 µm) (see Fig. 8). With one camera pixel per spectral channel, the number of spectrally resolvable bands can be approximated as 60 pixels/2.49 pixels ≈ 24.
To evaluate the imaging performance of the IMS, the MTF at the image plane is obtained by taking the Fourier transform of the LSF (assuming that the MTF is separable in x and y). The measured result is shown in Fig. 9, indicating performance very close to the diffraction limit.
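The LSF-to-MTF step can be sketched numerically (a minimal illustration; the Gaussian stand-in for the measured LSF, with the 2.49-pixel FWHM quoted above, is our assumption, since the real measured profile is not reproduced here):

```python
import numpy as np

# Stand-in for the measured LSF: a Gaussian with FWHM ~2.49 pixels.
fwhm_pix = 2.49
sigma = fwhm_pix / (2 * np.sqrt(2 * np.log(2)))   # FWHM -> Gaussian sigma

x = np.arange(-32, 32)                  # pixel coordinates around the line
lsf = np.exp(-x**2 / (2 * sigma**2))

# MTF = normalized magnitude of the Fourier transform of the LSF.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                           # normalize to 1 at zero frequency
freq = np.fft.rfftfreq(x.size, d=9.0)   # cycles per micron (9 um pixels)
```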
4.2 Spectral imaging of triple-labeled bovine pulmonary artery endothelial cells (BPAEC)
To test the spectral imaging capability of the high-sampling IMS in cellular fluorescence microscopy, triple-labeled BPAE cells (Invitrogen FluoCells prepared slide #1, Cat. no. F14780) were chosen as a sample. Mitochondria are labeled with MitoTracker Red CMXRos; filamentous actin is labeled with Alexa Fluor 488 phalloidin; and nuclei are labeled with DAPI. A triple-band filter set (Chroma 61001, DAPI/FITC/TRITC) was used to separate the excitation and emission light. The direct reference image was captured by the color camera at the microscope side port and is shown in Fig. 10(a).
At an illumination level of 3.3 mW/mm2 at the sample plane, an 11-bit IMS raw image was captured with an integration time of 1 s [see Fig. 10(b)]. A total of 27 of the 60 acquired spectral images are shown in Fig. 11 (see Media 1 for a scan through all acquired wavelengths). The acquired 3D 285 × 285 × 60 (x, y, λ) datacube is shown in Fig. 12 (see Media 2 for a rotating view of the 3D datacube). Note that there are empty gaps between the spectra of the three component dyes in the datacube. These are created by the blocking bands of the filter set, not by the IMS; the IMS itself has the capability to capture the full spectrum. Recent developments in full-spectrum fluorescence microscopy have offered alternative excitation/emission separation mechanisms which can replace traditional filter-based imaging [23,24]. These developments can greatly benefit broadband spectral imagers such as the IMS in multi-fluorophore imaging, because the difficulty of obtaining multiple-band dichroic filters is removed and the problem of empty gaps in the spectrum can be avoided.
Since the optics used in the current system are off-the-shelf and the throughput has not been fully optimized (see the discussion in Section 5), an integration time on the order of one second is needed here to capture an 11-bit image. However, the integration time is expected to be significantly shortened if the system throughput is improved and higher-power illumination is adopted.
5. Conclusion and discussion
A snapshot high-sampling IMS has been constructed for hyperspectral microscopy applications. It can measure a 285 × 285 × 60 (x, y, λ) datacube in a single camera snapshot. Since the Nyquist sampling condition is satisfied on the image mapper, the spatial resolution of the initial microscope objective is maintained. When using a 40 × (N.A. = 0.75) Zeiss EC Plan-Neofluar objective on the microscope, the IMS has a FOV of 100 × 100 µm² with 0.45 µm spatial resolution. The acquired spectrum spans 450 nm to 650 nm and is sampled by 60 channels with an average ~3.3 nm sampling interval. The actual number of resolvable spectral bands is measured to be approximately 24, close to the design number of 25. The channel’s average spectral resolution is ~8 nm. The frame rate of the current high-sampling IMS system is limited by the camera’s readout speed (5 MHz, due to its USB interface), so it takes around 10 seconds to download full-frame data from the camera to the PC. Incorporating cameras with faster interfaces, such as Camera Link, will enable the IMS to reach 3-10 fps readout speeds while maintaining its high sampling resolution. An endoscopic version of the IMS that utilizes a Camera Link-based camera is currently being built in our group.
Calibration procedures for the IMS are also presented in this paper. The key step is to establish the one-to-one correspondence between voxels in the datacube and pixels on the CCD camera. Since the current calibration is based on wide-field imaging (all points in the FOV are calibrated simultaneously), the procedure is time-efficient: only 60 narrow-band elemental images are needed. Currently the calibration process does not provide crosstalk correction. In previous work, a 10% crosstalk level was reported, due to diffraction effects at the image mapper and form error arising during fabrication. To decrease this crosstalk level, a physical pupil mask was implemented in this system to constrain the light entering the reimaging optics. This step reduced the crosstalk level in the system to around 6%. Crosstalk can be further reduced by improving the current calibration procedure so that the mapping relationship between voxels in the datacube and pixels on the CCD camera is described by a five-dimensional tensor. The procedure for this crosstalk-correction calibration is currently being investigated in our group.
To evaluate the system’s spectral imaging capability, the IMS was used for cellular fluorescence microscopy for the first time. A 3D full-resolution datacube was acquired and displayed simultaneously (i.e., without a long delay for computational reconstruction). At an illumination level of 3.3 mW/mm2, an 11-bit raw IMS image was captured with a 1 second integration time.
The light throughput of the current high-sampling IMS system is measured to be around 20%. This relatively low figure is due to the fact that most of the system components are built with off-the-shelf optics, which are not optimized for light efficiency in the IMS. For example, our measurements indicate that 50% of the light is lost at the 4 × beam expander, whose primary role is to fit the pupil array to the size of commercially available reimaging optics. The system throughput is expected to be improved by a factor of at least 3.5 by adopting custom reimaging optics and eliminating the need for the beam expander.
One of the common problems in hyperspectral imaging is the compromise between the number of spectral channels and each channel’s signal-to-noise ratio (SNR). Noise aside, more spectral channels often mean that more fluorophores in the sample can be spectrally unmixed (as long as the system remains over-determined). However, for a fixed emission intensity, detector and photon noise mean that the side effect usually accompanying an increase in spectral channel number is a decrease in each channel’s SNR, which can reduce the accuracy of spectral unmixing. In the current IMS system, a single channel’s readout noise is about 10.5 electrons, which provides a high dynamic range (80 dB) with a full well depth of 85,000 electrons. The camera’s pixel-binning capability along the spectral direction offers further flexibility to tune the number of spectral channels to the optimal set once a specific fluorophore combination is chosen for imaging. The imaging result shown in Fig. 11 demonstrates the IMS’s spectral imaging performance even in the extreme case where the number of sampling channels is maximized. Note that many emerging biomedical imaging problems require fine spectral sampling over a broadband range, such as vascular imaging, retinal imaging, and oral cancer diagnostics. The IMS technique, with its full spectral sampling capacity, will have an edge in those applications because parallel acquisition enables real-time 2D imaging (for qualitative feedback) and spectral detection (for quantitative assessment) simultaneously. Another development that may diminish the trade-off between the number of spectral channels and an individual channel’s SNR is the recent arrival of low-noise detector arrays, such as the sCMOS camera, whose readout noise is less than 2 electrons (close to EMCCD performance) even at very high frame rates (30 fps).
Incorporating such cameras into the IMS will significantly increase each channel’s SNR (until the system is shot-noise limited) as well as the whole system’s frame rate.
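The dynamic-range figure quoted above can be checked with a few lines (a minimal sketch using the camera numbers from the text; the square-root read-noise model for binning below is our assumption for off-chip summation, and on-chip CCD binning, which incurs read noise only once, would do even better):

```python
import math

# Camera figures quoted in the text: Apogee U16M full well and read noise.
full_well_e = 85_000
read_noise_e = 10.5

# Dynamic range in dB: 20*log10(85000/10.5) ~ 78 dB, i.e. roughly the
# "80 dB" quoted above.
dr_db = 20 * math.log10(full_well_e / read_noise_e)

# Binning n pixels along the spectral direction (off-chip summation model):
# signal grows n-fold while read noise grows only as sqrt(n), so the
# read-noise-limited SNR improves by sqrt(n).
n = 4
snr_gain = n / math.sqrt(n)
```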
In summary, the high-sampling IMS is an important spectral imaging modality developed for hyperspectral microscopy. It can capture a 3D (x, y, λ) datacube, in effect providing 60 spectral images in a single snapshot. The acquisition and display of a full-resolution datacube can be simultaneous, as the remapping procedure involves only a simple numerical operation. The spectral imaging results on a biological specimen demonstrate that the IMS has significant potential to open up new areas of investigation in real-time bio-imaging applications.
This work is supported by the National Institutes of Health under Grant No. R21EB009186.
References and links
3. T. Vo-Dinh, “A hyperspectral imaging system for in vivo optical diagnostics,” IEEE Eng. Med. Biol. Mag. 23, 40–49 (2007).
4. K. J. Zuzak, R. P. Francis, E. F. Wehner, J. Smith, D. Allen, M. Litorja, C. Tracy, J. Cadeddu, and E. Livingston, “Hyperspectral imaging utilizing LCTF and DLP technology for surgical and clinical applications,” Proc. SPIE 7170, 7170-10 (2009).
5. Carl Zeiss, Germany, “LSM 710 product brochure,” http://www.zeiss.com.
6. V. Ntziachristos, J. Ripoll, L. V. Wang, and R. Weissleder, “Looking and listening to light: the evolution of whole-body photonic imaging,” Nat. Biotechnol. 23(3), 313–320 (2005). [CrossRef] [PubMed]
7. R. Lansford, G. Bearman, and S. E. Fraser, “Resolution of multiple green fluorescent protein color variants and dyes using two-photon microscopy and imaging spectroscopy,” J. Biomed. Opt. 6(3), 311–318 (2001). [CrossRef] [PubMed]
9. ChromoDynamics, Inc., Orlando, FL, “HSi-300 hyperspectral imaging system data sheet”. http://www.chromodynamics.net/.
10. Cambridge Research and Instrumentation, Inc., Cambridge, MA, “VARISPEC liquid crystal tunable filters brochure”. http://www.cri-inc.com/
12. K. Ritchie, X. Y. Shan, J. Kondo, K. Iwasawa, T. Fujiwara, and A. Kusumi, “Detection of non-Brownian diffusion in the cell membrane in single molecule tracking,” Biophys. J. 88(3), 2266–2277 (2005). [CrossRef]
13. K. N. Richmond, R. D. Shonat, R. M. Lynch, and P. C. Johnson, “The critical oxygen tension of skeletal muscle in vivo,” Am. J. Physiol. 277(5 Pt 2), H1831–H1840 (1999). [PubMed]
14. B. K. Ford, C. E. Volin, S. M. Murphy, R. M. Lynch, and M. R. Descour, “Computed tomography-based spectral imaging for fluorescence microscopy,” Biophys. J. 80(2), 986–993 (2001). [CrossRef] [PubMed]
16. C. A. Fernandez, A. Wagadarikar, D. J. Brady, S. C. McCain, and T. Oliver, “Fluorescence microscopy with a coded aperture snapshot spectral imager,” Proc. SPIE 7184, 71840Z (2009). [CrossRef]
17. C. F. Cull, K. Choi, D. J. Brady, and T. Oliver, “Identification of fluorescent beads using a coded aperture snapshot spectral imager,” Appl. Opt. 49(10), B59–B70 (2010), http://www.opticsinfobase.org/abstract.cfm?URI=ao-49-10-B59. [CrossRef] [PubMed]
18. A. Gorman, D. W. Fletcher-Holmes, and A. R. Harvey, “Generalization of the Lyot filter and its application to snapshot spectral imaging,” Opt. Express 18(6), 5602–5608 (2010), http://www.opticsinfobase.org/abstract.cfm?URI=oe-18-6-5602. [CrossRef] [PubMed]
19. J. R. Lakowicz, Principles of Fluorescence Spectroscopy, (New York, Springer, 2006).
20. L. Gao, R. T. Kester, and T. S. Tkaczyk, “Compact Image Slicing Spectrometer (ISS) for hyperspectral fluorescence microscopy,” Opt. Express 17(15), 12293–12308 (2009), http://www.opticsinfobase.org/abstract.cfm?URI=oe-17-15-12293. [CrossRef] [PubMed]
21. R. T. Kester, L. Gao, and T. S. Tkaczyk, “Development of image mappers for hyperspectral biomedical imaging applications,” Appl. Opt. 49(10), 1886–1899 (2010), http://www.opticsinfobase.org/ao/abstract.cfm?URI=ao-49-10-1886. [CrossRef] [PubMed]
22. R. T. Kester, L. Gao, N. Bedard, and T. S. Tkaczyk, “Real-time hyperspectral endoscope for early cancer diagnostics,” Proc. SPIE 7555, 75550A (2010). [CrossRef]
24. J. Y. Ye, C. J. Divin, J. R. Baker, and T. B. Norris, “Whole spectrum fluorescence detection with ultrafast white light excitation,” Opt. Express 15(16), 10439–10445 (2007), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-16-10439. [CrossRef] [PubMed]
25. Imperx Inc, “IPX-16M3 data sheet”, http://www.imperx.com.
26. T. Zimmermann, “Spectral imaging and linear unmixing in light microscopy,” in Advances in Biochemical Engineering Biotechnology, T. Scheper, ed. (New York, Springer, 2005).
27. K. J. Zuzak, M. D. Schaeberle, E. N. Lewis, and I. W. Levin, “Visible reflectance hyperspectral imaging: characterization of a noninvasive, in vivo system for determining tissue perfusion,” Anal. Chem. 74(9), 2021–2028 (2002). [CrossRef] [PubMed]
29. R. A. Schwarz, W. Gao, C. Redden Weber, C. Kurachi, J. J. Lee, A. K. El-Naggar, R. Richards-Kortum, and A. M. Gillenwater, “Noninvasive evaluation of oral lesions using depth-sensitive optical spectroscopy,” Cancer 115(8), 1669–1679 (2009). [CrossRef] [PubMed]
30. Fairchild, Inc., Andor, Inc., and PCO, Inc., “sCMOS data sheet”. http://www.scmos.com/.