Optica Publishing Group

Compact Image Slicing Spectrometer (ISS) for hyperspectral fluorescence microscopy

Open Access

Abstract

An image slicing spectrometer (ISS) for microscopy applications is presented. Its principle is based on redirecting image zones with specially arranged thin mirrors within a custom-fabricated component termed an image slicer. The demonstrated prototype can simultaneously acquire a 140nm spectral range across its 2D field of view from a single image. The spectral resolution of the system is 5.6nm. The FOV and spatial resolution of the ISS depend on the selected microscope objective; for the results presented they are 45 × 45 µm² and 0.45µm, respectively. This proof-of-concept system can be readily extended in the future to higher spectral and spatial resolution imaging. The system requires no scanning and minimal post-processing of data. In addition, the reflective nature of the image slicer and the use of prisms for spectral dispersion make the system light efficient. Both of these features are highly valuable for real-time fluorescence spectral imaging in biological and diagnostic applications.

©2009 Optical Society of America

1. Introduction

Fluorescence imaging is an indispensable tool for biological studies, especially in cellular research. Through the staining of cells with various fluorophores and imaging under a microscope, a large number of color-coded processes can be quantitatively characterized, from chromosome dynamics [1] to gene expression [2]. The development of fluorescent probes has greatly advanced cellular research by bringing high-quantum-yield fluorescent dyes and multiplexed staining methods into the field [3]. This has enabled researchers to investigate several organelles and their interactions in the same field of view (FOV), at the same time, and with high contrast. As more and more fluorophores are developed in the visible band, a central problem for multi-staining methods is how to discriminate fluorescent probes whose spectral peaks lie very close to each other. Thus, in addition to the traditional requirements on spatial resolution, high spectral resolution is also necessary for imaging devices that target fluorescence imaging applications.

Hyperspectral fluorescence microscopy (HFM) is an emerging field based on hyperspectral or multispectral imaging concepts, which often borrow from remote sensing techniques [4,5]. Because of its high spectral resolution (less than 10nm), HFM has found many applications in spectral imaging of living cells [6–9]. HFM has also been used to discriminate the contributions of autofluorescence from the exogenous fluorescent signals present in a sample [10]. Compared to the three spectral bands obtained by traditional RGB color cameras or multi-filter imaging, HFM can capture the whole fluorescence spectrum within its 2D FOV and build a 3D datacube (x, y, λ) for multivariate data analysis. Such data can provide accurate information about fluorescent probe distributions and their relative concentrations over the whole specimen.

On the other hand, HFM with high temporal resolution is gaining importance in biological microscopy, because it can be used to capture transient scenes, which is often a critical requirement in cellular dynamics research. Unfortunately, most currently available HFM systems require scanning, which limits their temporal resolution. For example, hyperspectral confocal microscopy is a spatial scanning technique that can perform three-dimensional sectioning while providing spectral information. However, even state-of-the-art HFM/confocal systems can only acquire data at rates up to 5 frames/s at 512 × 512 pixels [11]. HFM with an acousto-optic tunable filter (AOTF) or liquid crystal tunable filter (LCTF) is another technique, based on spectral scanning [12,13], which can switch wavelengths very quickly. For HFM with an AOTF, switching times are typically less than 100 microseconds [14]; for HFM with an LCTF, they are around 50ms in the visible band and ~150ms in the NIR band [15]. However, there is a trade-off between the number of spectral bands captured and the total acquisition time. Moreover, due to fairly poor throughput (the AOTF has a transmission of ~30% in the visible range [14]; the LCTF can exhibit over 50% peak transmission for red and NIR light, but this drops to ~15% in the blue region [15]), these systems are not ideal candidates for real-time fluorescence imaging. In addition to the above HFM approaches, other scanning techniques include Fourier-transform imaging spectrometers [16] (scanning in phase space) and fiber Fabry-Perot arrays [17] (scanning in frequency). The scanning mechanism of these HFMs still decreases their temporal resolution and limits their potential use in real-time imaging. To fully exploit the information yielded by fluorescent probes in HFM, snapshot techniques are needed.

Currently, many snapshot techniques have been developed for hyperspectral imaging, such as aperture splitting [18], field splitting (by fibers [19] or lenslet arrays [20]), Computed Tomography Imaging Spectrometry (CTIS) [21], and Coded Aperture Snapshot Spectral Imaging (CASSI) [22,23]. Among these, CTIS and CASSI are particularly interesting due to their higher throughput and compact size, both critical features for fluorescence microscopy. CTIS has already been demonstrated in fluorescence microscopy [24], and CASSI has recently been used in real-time spectral imaging for remote sensing applications [25] but has yet to be tested in microscopy. CTIS utilizes a computer-generated hologram (CGH) to map multiple projections of the 3D datacube (x, y, λ) onto a 2D detector array. After processing with linear-algebra reconstruction methods, spectra from every spatial position within the CGH's two-dimensional FOV are obtained. Although CTIS can provide spectral imaging of fast-moving and/or low-light objects, it suffers from several problems, including massive computational requirements and the missing-cone effect. CASSI draws on ideas from compressed sensing. Spatial modulation is introduced by a coded aperture and is later transformed into combined spatial and spectral modulation during the decoding process. A multiscale reconstruction algorithm is then employed to extract the spatial and spectral information from the mask-modulated intensity image. However, this technique has limitations in spectrally resolving point sources. Besides these two, current aperture-splitting and field-splitting techniques also have drawbacks. Aperture splitting is not light efficient, while field splitting by fibers or lenslet arrays is limited by the size of the spatial sampling components.

In this paper, we present a novel snapshot HFM device – the Image Slicing Spectrometer (ISS). It can acquire the whole spectral information within its FOV via a single integration of an array detector. By directly imaging the remapped and dispersed image zones onto a CCD detector, the ISS system avoids the computational-reconstruction and resolution-loss problems of CTIS and CASSI. The ISS acquires data directly, with minimal post-processing needed to build the 3D datacube.

Although the ISS concept has been established in astronomical optics for over a decade [26–29], because of the characteristics of the imaged objects (galaxies, stars, etc.), astronomical ISS systems have relatively low spatial sampling (typically fewer than 60 image slices) [30]. No current astronomical ISS system can be simply modified and adapted for the demanding requirements of biological fluorescence microscopy. To the best of our knowledge, this is the first time that the image slicing concept has been implemented for high-resolution microscopy. Our ISS system can be easily coupled to any microscope system with an image output port. This prototype surpasses existing astronomical ISS systems in two respects. First, the slicing component has been miniaturized: the width of each slicing component in the image slicer is quite small (160µm) compared to the FOV (25mm), enabling high spatial sampling of the object. Second, the use of a single large-format CCD detector combined with grouped 2D slicing directions greatly simplifies the reimaging process. This is an important improvement since it allows the construction of a compact high-resolution system. The prototype realizes 100 × 100 × 25 sampling in the 3D datacube (x, y, λ), which corresponds to 0.45 micron and 5.6 nm resolution in the spatial and spectral domains respectively. While the system presented here is a proof-of-concept device, the instrument could be redesigned and built to a different specification to improve spatial and spectral resolution if required. The imaging results presented in Section 4 demonstrate the promising potential of the ISS system for HFM research.

2. General principle

The operating principle of the ISS system is shown in Fig. 1. An acinus cell image [31] formed at the side port of a microscope is first reimaged onto the image slicer – a custom-made redirecting mirror (for more details see Section 3.2). The image slicer is composed of many long strip mirrors (called slicing components) which reorganize the image to provide optically void regions on a large-format CCD image sensor. The slicing components of the image slicer have different two-dimensional tilt angles that reflect the sliced image zones into different directions. In Fig. 1 the image slicer is shown as a simplified 3D model with only 8 slicing components, tilted in the directions (αi, βj) (i, j = 1, 2). However, the real image slicer used in the system has many more slicing components and tilt angles. Subsequently, a prism disperses the sliced image zones into their neighboring void regions. The sliced and dispersed image can be acquired in a single integration event on the CCD camera. This mapping method establishes a one-to-one correspondence between each voxel in the datacube (x, y, λ) and a pixel on the CCD camera. Therefore it is possible to directly measure the distribution of light intensities in the object. The position-encoded pattern on the CCD camera contains the spatial and spectral information within the microscopic image, both of which can thus be obtained simultaneously. No reconstruction algorithm is required since the image data contain direct irradiance from the object. Simple image remapping is sufficient for image and data display.
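Because the slicer establishes a one-to-one voxel-to-pixel mapping, building the datacube is a lookup rather than an inversion. A minimal sketch in Python (the function name, strip geometry, and `slice_origins` bookkeeping are hypothetical illustrations, not the authors' actual calibration data):

```python
import numpy as np

def remap_to_datacube(raw, slice_origins, n_x, n_y, n_bands):
    """Remap a raw ISS detector frame into an (x, y, lambda) datacube.

    Each sliced, dispersed image strip occupies a known region of the
    detector; because the mapping is one-to-one, no reconstruction
    algorithm is needed, only indexing.

    raw           : 2D detector image
    slice_origins : for each of the n_x slices, the (row, col) where its
                    dispersed strip starts on the detector
    """
    cube = np.zeros((n_x, n_y, n_bands))
    for i, (r0, c0) in enumerate(slice_origins):
        # each strip spans n_y pixels spatially and n_bands spectrally
        cube[i] = raw[r0:r0 + n_y, c0:c0 + n_bands]
    return cube
```

In practice the strip origins would come from a calibration step like the one described in Section 4.1.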

Fig. 1 The operating principle of the ISS system

The dimensions of the datacube obtained with the ISS depend on the size of the CCD image sensor: the total number of voxels cannot exceed the total number of pixels on the CCD camera. Therefore, for a given camera, one can always increase the spatial sampling at the expense of spectral sampling, and vice versa. For example, with a 1024 × 1024 pixel camera, a datacube (x, y, λ) can be built either in a 256 × 256 × 16 format or a 512 × 512 × 4 format (the first two numbers describe the spatial sampling, and the third the spectral sampling). To realize higher spatial and spectral sampling simultaneously, a large-format CCD camera should be selected. The largest scientific-grade CCD camera currently available is 16 megapixels, which limits the ISS to a maximum datacube of 500 × 500 × 25. Consumer-grade cameras have recently reached 25-megapixel resolutions, which further extends the possible resolution of the ISS in the future.
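The voxel-budget argument above is simple integer arithmetic; a small helper (illustrative only, name assumed) makes the trade-off explicit:

```python
def max_spectral_bands(cam_pixels, spatial_x, spatial_y):
    """Voxel budget: total (x, y, lambda) samples cannot exceed the
    number of detector pixels, so spectral sampling is bounded by
    cam_pixels // (spatial_x * spatial_y)."""
    return cam_pixels // (spatial_x * spatial_y)
```

For the 1024 × 1024 example in the text this gives 16 bands at 256 × 256 spatial sampling and 4 bands at 512 × 512; real designs (including the 500 × 500 × 25 limit quoted) reserve additional pixels for void regions and guard bands, so the practical bound is lower.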

3. Instrument description

3.1 System setup

The ISS system is a universal platform that can be coupled to many imaging modalities, such as microscopes, endoscopes and others. Focusing initially on fluorescence microscopy, we have constructed a prototype ISS coupled to a Zeiss AX10 inverted microscope as the fore-optics. A photograph of the prototype system is shown in Fig. 2(a), and the schematic layout is presented in Fig. 2(b). Specimens are placed on the microscope stage and illuminated by a 120W X-Cite arc lamp. The fluorescent signal is collected by a Zeiss EC Plan-Neofluar 40 × /N.A. = 0.75 objective. The intermediate image is formed outside the microscope side image port, co-located with the field stop of the ISS system.

Fig. 2 ISS system setup. (a) A photograph of the system. A switchable dual-port image relay is mounted on the microscope side port. One port is connected to the ISS system; the other can be used as a direct imaging port to provide a standard image or reference spectrum. (b) The schematic layout. Light rays reflected from different tilted slicing components are labeled with different colors. Note that only tilts with respect to the y-axis are shown in the figure.

The intermediate image at the field stop is first re-imaged by a 10 × magnification image relay system (telecentric in both object and image space) onto a custom-fabricated image slicer. One role of this image relay is to preserve the image resolution by matching the size of the image PSF to that of a slicing component. The other is to guarantee strict telecentricity on the image slicer side, which is required for correct guidance of the chief rays. The image slicer is a one-dimensional mirror array with 25 distinct two-dimensional tilt angles (0°, ± 0.23°, ± 0.46° with respect to both the x- and y-axes) that reflects zones of the sliced image into 25 different directions. The total number of slicing components on the image slicer is 100, each with dimensions of 16mm × 160µm in length and width respectively (see Table 1). In Fig. 2(b), only tilt angles with respect to the y-axis are shown. The redirected light is gathered by the collecting lens (130mm Zeiss tube lens, N.A. = 0.033, FOV = 25mm) and forms 25 separate pupils at the pupil plane. A 5.56 × beam expander (Edmund Optics Gold series telecentric lenses 58258, FOV = 8mm) adjusts the pupil dimensions to match those of the re-imaging lens array optics. The magnified pupils are dispersed by a custom prism (material: SF4, 10° wedge angle, made by Towel Optics) and then re-imaged onto a large-format CCD camera (Apogee U16M, 4096 × 4096 pixels, 9 micron pixel size, RMS noise: 10.5 e-, dark current: 0.13 e-/pixel/s) by a 5 × 5 array of re-imaging systems. Each re-imaging lens is composed of a 60mm F.L. positive achromatic doublet (Edmund Optics 47698, dia. = 6.25mm) and a −12.5mm F.L. negative achromatic doublet (Edmund Optics 45420, dia. = 6.25mm), forming a long-focal-length lens (F.L. = 350mm). Note that the ISS prototype presented here does not use the full CCD resolution; this large image sensor will allow us to greatly improve the system resolution in future development.

3.2 Design, fabrication and characterization of image slicer

3.2.1 Principle of a grouped 2D slicing design

The image slicer is the most critical component in the ISS system. It slices the field and redirects the image zones into different pupils. The image slicer is comprised of multiple mirror facets called slicing components. The number of slicing components (N), the number of resolvable image PSF widths along the length of each slicing component (M) (note: the prototype's spatial resolution in this direction is optics limited), and the number of tilt angles (L) determine the size N × M × L of the 3D (x, y, λ) datacube. Here N is the total number of spatial data points in the x-dimension, M is the total number of spatial data points in the y-dimension, and L is the total number of spectral data points (λ). In order to obtain the required void regions in the undispersed image, the slicer is composed of repeating blocks of slicing components with L tilt angles, as shown in Fig. 3(a). For our prototype system there are 25 tilt angles in total (i.e., 25 spectral bands), labeled in red with an accompanying arrow indicating the direction of tilt. Each slicing component within the block redirects a part of the image within that block to a unique location in the pupil plane (Fig. 3(b)), labeled with the same red number as its corresponding tilt angle. The block is then repeated down the length of the slicer until all N slicing components in the x direction have been obtained. In this manner, each pupil only sees every Lth slice of the image, corresponding to the same tilt angle (in both x and y directions). For our 100 × 100 × 25 prototype system, each lens in the re-imaging array therefore sees a total of only 4 slices from the original image.
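The grouped slicing scheme reduces to two index rules, sketched below under the assumption (consistent with the text) that the k-th component of each L-component block feeds the k-th pupil; the function names are hypothetical:

```python
def pupil_index(slice_index, n_tilts=25):
    """Within each repeating block of n_tilts slicing components, the
    k-th component carries the k-th tilt and lands in the k-th pupil."""
    return slice_index % n_tilts

def slices_seen_by_pupil(pupil, n_slices=100, n_tilts=25):
    """Each pupil (re-imaging lens) sees every n_tilts-th slice of the
    original image."""
    return [s for s in range(n_slices) if pupil_index(s, n_tilts) == pupil]
```

For the 100 × 100 × 25 prototype this reproduces the statement in the text: each of the 25 pupils collects exactly 4 of the 100 slices.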

Fig. 3 Pupil selection principle. (a) One of the image slicer's repeating blocks; (b) the corresponding pupil plane. In (a), the arrow in each slicing component represents the tilt direction (there is no arrow on slicing component 13 because it has no tilt) and the sequential number represents the slicing component index. Light reflected from each slicing component in this block enters the corresponding pupil in (b). The slicing-component dimensions in the figure are not drawn to scale, in order to show their features. In the prototype, each slicing component is 16mm in length (Y direction) and 160µm in width (X direction).

As the system is telecentric on the slicer's image side, the chief rays reflected by a given slicing component in each block have the same reflection angle. After passing through the collecting lens, the light associated with these chief rays enters the pupil corresponding to that tilt direction. In this way, the reflected light from an entire slicer block is separated into different pupils. Each reimaging lens is then dedicated to only one tilt direction and reimages all associated image lines. The image is thus efficiently redistributed for spectral separation without losing any light. Note that this is an important improvement over previous astronomical image slicers, which reimaged every slice separately instead of through a common pupil location; the common pupil simplifies the design and makes it more compact for high-resolution microscopy applications. Using this approach, the spatial and spectral resolution must be balanced against both the N.A. and the FOV of the collecting lens (i.e., its etendue). For the square pupil configuration, (L/2)^(1/2) · N.A.slicer ≤ N.A.collect, where N.A.collect is the N.A. of the collecting lens and N.A.slicer is the N.A. exiting the image slicer. N.A.collect limits the spectral resolution of the system, while the FOV determines the spatial resolution (i.e., the number of mirror facets in the FOV).
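The square-pupil etendue bound can be rearranged into a one-line check on the largest usable slicer-side N.A.; `slicer_na_limit` is a hypothetical helper name:

```python
import math

def slicer_na_limit(n_tilts, na_collect):
    """Square-pupil packing bound from the text:
    sqrt(L/2) * NA_slicer <= NA_collect, so the largest usable
    slicer-side numerical aperture is NA_collect / sqrt(L/2)."""
    return na_collect / math.sqrt(n_tilts / 2)
```

With the prototype values (L = 25 pupils, N.A.collect = 0.033) the slicer-side N.A. must stay below about 0.0093, illustrating why higher spectral sampling demands a larger-etendue collecting lens.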

3.2.2 Fabrication of image slicer

The image slicer can be fabricated with several technologies, such as raster flycutting, micro-milling, or lithography [32,33]. For this prototype, the image slicer was made by raster flycutting on a Nanotech U250 Ultra-Precision Lathe (UPL); it can later be replicated by molding for mass production. The Nanotech U250 UPL is a four-axis (X, Y, Z and C) lathe which allows diamond turning, micro-milling and raster flycutting with sub-micron precision and 1-10 nm scale surface roughness. The image slicer substrate used for the ISS prototype is made of high-purity aluminum. The substrate was placed on a manual goniometer (Newport GON40-U, 0.0022° sensitivity) fixed to the base of the Y axis of the UPL. A custom diamond flycutting tool made by Chardon Inc. (160µm tip width, 20.03° included angle) was mounted on the spindle of the lathe and rotated at 2800 rpm during the cutting process. The slicing components are designed to have tilts around the Z axis and the X axis in machine coordinates (see Fig. 4). The tilt of each slicing component is described by two parameters: the tilt angle α around the Z axis, which is controlled by the depth of cut of the flycutting tool, and the tilt angle β around the X axis, which is controlled by the rotation angle of the goniometer.

Fig. 4 Raster flycutting on the Nanotech U250 UPL. Red coordinate arrows indicate the X, Y, and Z axes of the machine. The C axis is not used in raster flycutting mode.

When the cutting process begins, the rotation angle of the goniometer is first set to 0.46° with respect to the horizontal plane. The flycutting tool then slides smoothly over the surface along the X axis, removing material along its path. After completing a single slicing component, the flycutting tool returns and shifts by one slicing-component width (160 microns) along the Z axis, then starts another pass until all the slicing components in a block with tilt angle β = 0.46° (called a slicer section) are fabricated. The flycutting tool then shifts by one block width (4mm) along the Z axis to cut the next block. By shifting the origin's Z coordinate by one slicer-section width (0.8mm) after each change of goniometer angle, the whole process is repeated for β = 0.46°, 0.23°, 0°, −0.23°, −0.46° sequentially. The preset parameters for the finished image slicer are shown in Table 1.
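The raster schedule described above can be sketched as a nested loop; the function and its arguments are illustrative, with the component width (160 µm), section width (0.8 mm), and block width (4 mm) taken from the text, and four blocks assumed for the 100-component slicer:

```python
def cutting_passes(n_blocks=4, tilts_beta=(0.46, 0.23, 0.0, -0.23, -0.46),
                   comps_per_section=5, comp_width=0.160, block_width=4.0):
    """Sketch of the raster-flycutting schedule (hypothetical helper,
    all lengths in mm).

    For each goniometer angle beta, the tool cuts the comps_per_section
    components sharing that beta in every block, stepping comp_width
    between components and block_width between blocks; each new beta
    shifts the Z origin by one section width."""
    section_width = comps_per_section * comp_width  # 0.8 mm
    passes = []
    for k, beta in enumerate(tilts_beta):
        origin = k * section_width
        for b in range(n_blocks):
            for c in range(comps_per_section):
                z = origin + b * block_width + c * comp_width
                passes.append((beta, round(z, 3)))
    return passes
```

Enumerating the schedule this way makes it easy to verify that the five goniometer settings together cover all 100 component positions without overlap.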

Table 1. Fabrication parameters for image slicer

3.2.3 Characterization of image slicer

Photographs of the fabricated image slicer prototype are shown in Fig. 5(a) and Fig. 5(b), and a three-dimensional profile of a portion of the image slicer, acquired with a white-light interferometer, is shown in Fig. 5(c).

Fig. 5 The profile of the image slicer. (a) and (b) are photographs. In (a), the sliced Rice logo letters can be seen directly in the reflection direction. In (b), a quarter is placed as a reference to show the size of the image slicer (16mm × 16mm). (c) is a three-dimensional image of a portion of the image slicer obtained with a Zygo white-light interferometer.

The tilt angle and surface roughness of each slicing component were measured using a Zygo NewView 5032 white-light interferometer (Fig. 5(c)). The accuracy of the interferometer's tilt-angle measurement is 0.01°. The tilt angles of the 25 slicing components in a block were also measured. Each measured value presented in Table 2 was obtained by averaging over the five slicing components in a block sharing the same expected α or β. Generally speaking, α is closer to the expected value than β, because β is controlled by the manual goniometer. For the current low-sampling ISS prototype, the reimaging optics can be designed to accommodate these tolerances. For a future high-resolution ISS system, however, stricter angle tolerances will be required, and the manual goniometer should be replaced by a more precise motorized one.

Table 2. Slicing component tilt angle measurement

To obtain the surface roughness data, a virtual mask is applied so that only one slicing component is visible in the FOV (0.14mm × 0.11mm). Roughness was measured on three randomly selected slicing components. An average RMS value of 6nm indicates good surface quality. In Fig. 6, a 3D height profile of a slicing component is shown. Given the high-purity aluminum substrate and the surface smoothness (the decrease in specular reflectance due to surface roughness is only ~2% [34]), the reflectivity of the slicing components is expected to be comparable to that of evaporated aluminum film, which is above 90% in the visible range [34].
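The ~2% figure is consistent with the standard total-integrated-scatter estimate for a smooth surface, R/R0 = exp[−(4πσ/λ)²]; a quick check, assuming normal incidence and λ ≈ 550 nm (values not stated explicitly in the text):

```python
import math

def specular_loss(sigma_nm, wavelength_nm):
    """Total-integrated-scatter estimate: fractional drop in specular
    reflectance caused by RMS roughness sigma at normal incidence,
    1 - exp(-(4*pi*sigma/lambda)**2)."""
    return 1.0 - math.exp(-(4 * math.pi * sigma_nm / wavelength_nm) ** 2)
```

With σ = 6 nm and λ = 550 nm this evaluates to about 1.9%, matching the ~2% quoted.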

Fig. 6 Slicing component surface height profile. The roughness data are obtained after removing the overall tilt. Surface roughness RMS value = 6 nm.

3.3 Optical design of reimaging lenses

In this 100 × 100 × 25 (x, y, λ) ISS prototype, the reimaging lens sets are assembled from off-the-shelf achromatic doublets (60mm F.L. and −12.5mm F.L.) arranged in a 5 × 5 array. Custom opto-mechanics were designed and fabricated to support these lenses (see Fig. 7). The opto-mechanics were black anodized to reduce stray light in the system. The optical design of each reimaging lens set in the array was modeled in ZEMAX to verify diffraction-limited performance over the full field of view (FOV) and spectral range.

Fig. 7 Reimaging lenses and mount. (a) A photograph of the whole assembly. There are 25 tubes inside this mount; each tube holds a reimaging lens set. (b) A cross-sectional view of a single tube. The 60mm F.L. achromatic doublet is mounted at the back of the tube (facing the pupil), while the −12.5mm F.L. achromatic doublet is mounted at the front of the tube (facing the image plane). The F.L. of this reimaging lens set is 350mm.

The FOV of each reimaging lens set is designed to overlap with those of adjacent lens sets to maximize the usable area of the CCD camera. Because the image of the whole slicer plate is square while the FOV of a reimaging lens set is circular, four void regions exist outside the slicer plate's image but inside the FOV (see Fig. 8). Because of these void regions, the FOVs of neighboring reimaging lens sets are allowed to overlap, allowing the imaging area of the CCD camera to be fully utilized.

Fig. 8 Overlap of the FOVs on the CCD camera. Each reimaging lens set images the corresponding pupil in the pupil plane (see Fig. 3(b)). The FOVs of adjacent reimaging lenses overlap to fully utilize the CCD area. The image slicer itself acts as a field stop, allowing the overlap.

4. Imaging Results

In order to verify the imaging performance and test the spatial and spectral resolution of the ISS prototype, experiments were performed in the following sequence: (1) an undispersed 1951 USAF resolution target was imaged and the PSF (point spread function) of a single slice image was measured; (2) spectral images of test samples made with fluorescent beads were obtained.

4.1 Image quality and PSF measurement of undispersed sliced resolution target

An image of an undispersed 1951 USAF resolution target was obtained by removing the prism from the instrument. The target was placed on the microscope stage. The raw image was captured with the 16-bit CCD camera without binning (pixel size equal to 9µm). To calibrate the spatial features in the image, a black bar spanning the whole FOV was first imaged. The image slices were then aligned with each other manually to restore the straight-bar feature, and the starting coordinates of each image slice on the camera were recorded. A "jigsaw puzzle" algorithm was developed to build the system realignment matrix; applying this matrix to the raw target data reconstructs the image automatically. Note that misregistration and distortion are tolerable in the raw image because they are calibrated out of the system during this process. To calibrate the image intensity, a reference blank image was obtained under uniform illumination with no object on the microscope stage. Dividing the intensity data of each target image slice by the corresponding values in the reference image slice corrects the target image intensity.
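The realignment-plus-flat-field procedure amounts to indexing the raw frame by the recorded slice coordinates and dividing by a uniform-illumination reference. A simplified one-row-per-slice sketch (the function and data layout are hypothetical; the real "jigsaw puzzle" matrix also handles distortion):

```python
import numpy as np

def calibrate_slices(raw, flat, origins, length):
    """Realign and flat-field ISS slices (illustrative sketch).

    raw     : raw detector frame
    flat    : reference frame taken under uniform illumination
    origins : recorded (row, col) start coordinate of each slice, found
              once from the straight-bar calibration target
    length  : number of pixels along each slice
    """
    rows = []
    for r0, c0 in origins:
        strip = raw[r0, c0:c0 + length].astype(float)
        ref = flat[r0, c0:c0 + length].astype(float)
        # divide by the reference slice, guarding against zero pixels
        rows.append(strip / np.where(ref == 0, 1.0, ref))
    return np.vstack(rows)  # realigned, intensity-corrected image
```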

The smallest feature of the USAF target was moved to the center of the FOV to test the quality of the reconstructed image. The result reconstructed from the raw data (see Fig. 9(a)) is shown in Fig. 9(b). The width of the top black bars in the figure is 2.19 µm. Figure 9(c) shows a direct image of the same bars at the microscope side port for comparison (captured by a Lumenera Infinity1-1 monochromatic camera). The imaging results demonstrate that the prototype can image microscopic samples with contrast (Imax − Imin)/(Imax + Imin) ≈ 0.5 for the 2.19 µm bars in Fig. 9(b), which corresponds closely to the direct imaging result. The PSF of the ISS was measured by analyzing the intensity distribution along a line across a single slice of the image (see Fig. 10). The FWHM is about 7 pixels, i.e. 63µm. The theoretical N.A. at the camera side is 0.005, corresponding to a diffraction spot size of 122µm (the FWHM of the diffraction limit is 61 microns). This suggests that the ISS system approaches the diffraction limit. The distance between the peaks of two adjacent undispersed slice images was also measured to be around 170 pixels, indicating that about 24.3 spectral bands can be resolved within the void regions, very close to the 25 spectral bands designed for.
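The two figures of merit quoted above are straightforward to compute; small helpers with hypothetical names:

```python
def michelson_contrast(i_max, i_min):
    """Bar-target contrast as used in the text:
    (Imax - Imin) / (Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)

def resolvable_bands(slice_spacing_px, psf_fwhm_px):
    """Estimate how many spectral bands fit between adjacent undispersed
    slice images if each band occupies roughly one PSF FWHM."""
    return slice_spacing_px / psf_fwhm_px
```

With the measured values (170-pixel slice spacing, 7-pixel PSF FWHM) the second helper reproduces the ~24.3 resolvable bands stated in the text.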

Fig. 9 Undispersed images of a 1951 USAF target. The raw image (a) was obtained using the 16-bit camera without binning (pixel size = 9µm). (b) The reconstructed image. For comparison, an image of the same bars captured directly at the microscope side port with a monochromatic camera is shown in (c). The top bars in the FOV belong to Group 7, Element 6 (bar width = 2.19 µm).

Fig. 10 The PSF of a single slice from an undispersed image. The camera pixel size equals 9µm. The x and y positions indicate the location of this slice image in the CCD camera's global coordinates.

4.2 Fluorescent Beads imaging experiment

4.2.1 Sample preparation

Before imaging, the fluorescent beads were uniformly suspended by vortex mixing and sonicating the suspension. One drop of the suspension was added to 1ml of buffered saline solution for dilution. The diluted suspension was then deposited onto a microscope slide and sealed with a coverslip. Two samples were prepared for the ISS imaging test: one contains only green beads, while the other is a mix of yellow and red beads.

4.2.2 Imaging results

Images of the fluorescent beads are shown in Fig. 11 and Fig. 12. Chroma filter sets (61001 and 31002) were used to select the excitation wavelength and separate the fluorescent signal from the excitation light: filter set 61001 (DAPI/FITC/PI) was used for fluorescence imaging of the green beads, and filter set 31002 (TRITC/DiI/Cy3) for the mixed red and yellow beads. The diameter of the fluorescent beads is around 2.5µm. The raw images were captured with the 16-bit camera in 4 × 4 binning mode (binned pixel size equal to 36µm). Note that the magnification of the prototype is not optimized for the pixel size and therefore requires longer integration times. In the future the same CCD will be used for a larger-format, optimized ISS, and we predict integration times 16 to 64 times shorter. Currently, however, to fully utilize the dynamic range of the camera, the integration time is set to 2s to obtain the spectra of the yellow and red beads, and 6s to acquire the spectra of the green beads. The full dynamic range is used here because high-contrast imaging is preferable in this experiment: it provides a high-contrast fluorescent bead image with maximum spatial and spectral resolution. Users can increase the imaging speed based on their own contrast preferences. For example, if 8 bits of dynamic range are used instead of 16, the integration time for the green fluorescent beads is expected to be about 20ms. Users can also raise the illumination level to increase the sample brightness and reduce the necessary integration time. The solid blue line in each spectrum diagram was obtained with the ISS prototype, while the dotted line with red crosses was obtained with an Ocean Optics USB4000 spectrometer for comparison. Because this prototype is designed to demonstrate the principle, only a basic spectral calibration was carried out. For this initial spectral calibration, four wavelengths and their corresponding pixel locations were used.
The first three wavelengths (500 nm, 550 nm, and 600 nm) come from narrow-band filters (10nm bandwidth) placed in front of the microscope's broadband halogen lamp. The fourth wavelength comes from the fluorescent bead's spectral peak as measured by the USB4000 spectrometer. Linear interpolation between these wavelength locations is used to estimate the remaining wavelength/pixel correspondences. The spectral shapes of both measurements overlap well and demonstrate the capability of the ISS prototype for fluorescence spectral imaging. Note that although the calibration here is quite basic, the spectral data obtained with the ISS system match the reference spectrum very well. More precise and quantitative spectral data are expected from future ISS systems once complete and thorough calibration procedures are adopted.
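With only four calibration points, the wavelength axis follows directly from linear interpolation; a sketch using illustrative pixel locations (the true pixel coordinates are not given in the text):

```python
import numpy as np

def wavelength_axis(cal_pixels, cal_wavelengths, n_pixels):
    """Assign a wavelength to every spectral pixel by linearly
    interpolating between calibration points (here: three narrow-band
    filter lines plus one measured fluorescence peak). Pixels outside
    the calibrated range are clamped to the end values by np.interp."""
    return np.interp(np.arange(n_pixels), cal_pixels, cal_wavelengths)
```

For example, with hypothetical calibration pixels [0, 5, 10, 20] mapped to [500, 550, 600, 640] nm, pixel 2 interpolates to 520 nm.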

Fig. 11 ISS images of green fluorescent beads. The raw image is obtained using a 16-bit CCD camera with 6s integration time. The bead’s spectrum is obtained from point A in the re-constructed image.


Fig. 12 ISS images of red and yellow fluorescent beads. The raw image is obtained using a 16-bit CCD camera with 2s integration time. The yellow bead’s spectrum is from point B in the re-constructed image and the red bead’s spectrum is from point C in the re-constructed image.


5. Discussion and Conclusions

We have presented a proof-of-concept snapshot ISS for microscopy applications. The prototype is a compact instrument that can be coupled to many modalities, such as microscopes, endoscopes, and others. It achieves 100 × 100 × 25 sampling of the 3D datacube (x, y, λ), corresponding to 0.45µm spatial and 5.6nm spectral resolution respectively when a 40 × /N.A. = 0.75 objective is used on the microscope. The FOV, spectral range, and spatial and spectral resolution can easily be tuned to the specific needs of biological imaging. In this prototype, an off-the-shelf Zeiss tube lens is used as the collecting lens (25 mm FOV, 0.033 N.A.). Such optics set the spectral sampling limit at 25. To build ISS systems with higher spectral sampling, custom-made and custom-assembled lenses may be needed to effectively collect and reimage light onto the CCD camera.

The design and fabrication of the image slicer used in this prototype is an important improvement over previous astronomical slicers. First, the slicing components in this ISS system are only 160 microns wide, which allows sampling at high density. Additionally, the use of micro-slicers shrinks the entire system to compact table-top, or potentially smaller, dimensions suited for biological applications. For future ISS systems with higher spatial sampling, narrower cutting tools will be adopted to fabricate even thinner slicing components. This will help the ISS reach higher spatial resolution within a fixed FOV. Diffraction is an inevitable effect and must be seriously considered when small slicing components are used. When the slicing components sample the diffraction spot at the image side, some reflected rays deviate from the geometrical optical path and leak into adjacent pupils on the pupil plane, which can lead to crosstalk between neighboring array images. Crosstalk may become one of the major sources of background noise and affect the accuracy of spectral measurements, especially for stained samples emitting strong and weak fluorescent signals simultaneously. While this is a concern, it can be mitigated by cutting slicing components with larger tilt angles and separating the array components further. The theoretical and experimental analysis of crosstalk is currently being investigated in our group. A Fourier-transform-based diffraction analysis predicts crosstalk at approximately the 1% level for the presented prototype. Experimental results place this value between 2% and 8%, obtained by measuring the ratio of the intensity of the ghost image to that of the primary image within the FOV of the same reimaging lens set.
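The experimental crosstalk figure quoted above is a ratio of ghost to primary image intensity. A minimal sketch of that measurement, assuming the two regions have already been located in the raw frame (the region coordinates, intensity levels, and background value below are synthetic, not measured data):

```python
import numpy as np

def crosstalk_ratio(frame, primary_region, ghost_region, background=0.0):
    """Estimate crosstalk as the ghost/primary mean intensity ratio after
    background subtraction. `primary_region` and `ghost_region` are
    (row_slice, col_slice) tuples selecting the two images in the frame."""
    primary = frame[primary_region].mean() - background
    ghost = frame[ghost_region].mean() - background
    return ghost / primary

# Synthetic frame: 5-count background, primary image ~1000 counts above
# background, ghost image ~40 counts above background (i.e. 4% crosstalk).
frame = np.full((100, 100), 5.0)
frame[10:20, 10:20] = 1005.0  # primary image region
frame[10:20, 40:50] = 45.0    # ghost image region
ratio = crosstalk_ratio(frame,
                        (slice(10, 20), slice(10, 20)),
                        (slice(10, 20), slice(40, 50)),
                        background=5.0)
```

In practice the background would itself be estimated from an empty patch of the frame rather than assumed known.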
Besides diffraction, other image artifacts include shadowing of slices by neighboring slices’ side walls and the slice edge eating effect, both of which stem from the depth of cut and the included angle of the cutting tool. These artifacts degrade image quality little in the current slicer configuration. For example, the edge eating percentage is estimated to be under 7%, and shadowing is negligible because the incident angles of light are smaller than the tilt angles of the side walls. But they may become a serious problem when slicers with narrower widths and larger tilt angles are used in future higher-resolution systems.

The reimaging lens array built to image the separated pupils is another innovation of this prototype. Grouping the tilt angles simplified the reimaging optics and the overall system layout. The overlapped-FOV design enables efficient use of the CCD’s imaging area. As mentioned before, the final restriction on the sampling density of the datacube comes from the total number of imaging pixels on the CCD image sensor. Therefore the FOVs of the individual reimaging lens sets should be arranged efficiently to fully utilize the imaging capacity of the camera.

The temporal resolution of the current low-sampling ISS prototype is limited mainly by the integration time of the fluorescent signals. In addition, 4 × 4 binning mode is used on the camera to increase readout speed. For future high-sampling ISS systems the binning mode will be disabled and the PSF-to-pixel ratio will be smaller. This means the integration time can be significantly improved for the current CCD camera (we estimate a 16- to 64-fold improvement for this specific image sensor).

In summary, the ISS system is a snapshot technique that directly captures the spatial and spectral information within its FOV simultaneously. Only limited post-processing (re-mapping) is required, since the light intensity in each pixel corresponds to a position and wavelength in the object. The ISS approach has great potential to become an important modality for biological fluorescent imaging, which requires high spatial, spectral, and temporal resolution at the same time.
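The re-mapping step mentioned above amounts to a fixed lookup from raw-frame pixels to datacube voxels. A schematic sketch, assuming a precomputed per-voxel index map (in the real system this map is fixed by the slicer geometry and determined during calibration; the maps built below are toy values for illustration only):

```python
import numpy as np

def remap_to_datacube(raw, row_map, col_map):
    """Rearrange a raw ISS frame into an (x, y, lambda) datacube.
    row_map and col_map are integer arrays of shape (nx, ny, nl) giving, for
    each datacube voxel, the raw-frame pixel holding its intensity."""
    return raw[row_map, col_map]

# Toy example: a 4x4 raw frame remapped into a 2x2x4 cube, where each raw
# row holds the 4-sample spectrum of one spatial point.
nx, ny, nl = 2, 2, 4
raw = np.arange(16.0).reshape(4, 4)
row_map = np.repeat(np.arange(nx * ny), nl).reshape(nx, ny, nl)
col_map = np.tile(np.arange(nl), nx * ny).reshape(nx, ny, nl)
cube = remap_to_datacube(raw, row_map, col_map)
# cube[x, y, :] is now the spectrum of spatial point (x, y)
```

Because the lookup is a single fancy-indexing operation, the cost of re-mapping is negligible compared to the camera integration time, which is what makes the technique effectively snapshot.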

Acknowledgment

This work is supported by the National Institutes of Health under Grant No. R21EB009186. A patent on this prototype is currently pending. We would like to thank Vivian D. Mack for help with the preparation of the fluorescent bead samples.

References and links

1. A. S. Belmont, “Visualizing chromosome dynamics with GFP,” Trends Cell Biol. 11(6), 250–257 (2001). [CrossRef]   [PubMed]  

2. S. M. Janicki, T. Tsukamoto, S. E. Salghetti, W. P. Tansey, R. Sachidanandam, K. V. Prasanth, T. Ried, Y. Shav-Tal, E. Bertrand, R. H. Singer, and D. L. Spector, “From silencing to gene expression: real-time analysis in single cells,” Cell 116(5), 683–698 (2004). [CrossRef]   [PubMed]  

3. M. A. Rizzo, and D. W. Piston, “Fluorescent Protein Tracking and Detection in Live Cells,” in Live Cell Imaging: A Laboratory Manual, D. Spector and R. Goldman, eds. (Cold Spring Harbor Lab Press, Cold Spring Harbor, NY, 2004).

4. F. A. Kruse, “Visible-Infrared Sensors and Case Studies,” in Remote Sensing for the Earth Sciences: Manual of Remote Sensing, 3rd ed., A. N. Rencz, ed. (John Wiley & Sons, NY, 1999).

5. D. Landgrebe, “Information Extraction Principles and Methods for Multispectral and Hyperspectral Image Data,” in Information Processing for Remote Sensing, C. H. Chen, ed. (World Scientific Publishing Company, River Edge, NJ, 1999).

6. T. Zimmermann, J. Rietdorf, and R. Pepperkok, “Spectral imaging and its applications in live cell microscopy,” FEBS Lett. 546(1), 87–92 (2003). [CrossRef]   [PubMed]  

7. Y. Hiraoka, T. Shimi, and T. Haraguchi, “Multispectral imaging fluorescence microscopy for living cells,” Cell Struct. Funct. 27(5), 367–374 (2002). [CrossRef]   [PubMed]  

8. V. L. Sutherland, J. A. Timlin, L. T. Nieman, J. F. Guzowski, M. K. Chawla, P. F. Worley, B. Roysam, B. L. McNaughton, M. B. Sinclair, and C. A. Barnes, “Advanced imaging of multiple mRNAs in brain tissue using a custom hyperspectral imager and multivariate curve resolution,” J. Neurosci. Methods 160(1), 144–148 (2007). [CrossRef]  

9. W. F. J. Vermaas, J. A. Timlin, H. D. T. Jones, M. B. Sinclair, L. T. Nieman, S. W. Hamad, D. K. Melgaard, and D. M. Haaland, “In vivo hyperspectral confocal fluorescence imaging to determine pigment localization and distribution in cyanobacterial cells,” Proc. Natl. Acad. Sci. U.S.A. 105(10), 4050–4055 (2008). [CrossRef]   [PubMed]  

10. D. M. Haaland, J. A. Timlin, M. B. Sinclair, M. H. V. Benthem, M. J. Matinez, A. D. Aragon, and M. W. Washburne, “Multivariate curve resolution for hyperspectral image analysis: applications to microarray technology,” in Spectral Imaging: Instrumentation, Applications, and Analysis, R. M. Levenson, G. H. Bearman, and A. Mahadevan-Jensen, eds., Proc. SPIE 2959, 55–66 (2003).

11. C. Zeiss, Germany, “LSM 510 META Product Brochure”. http://www.zeiss.com.

12. V. Ntziachristos, J. Ripoll, L. V. Wang, and R. Weissleder, “Looking and listening to light: the evolution of whole-body photonic imaging,” Nat. Biotechnol. 23(3), 313–320 (2005). [CrossRef]   [PubMed]  

13. R. Lansford, G. Bearman, and S. E. Fraser, “Resolution of multiple green fluorescent protein color variants and dyes using two-photon microscopy and imaging spectroscopy,” J. Biomed. Opt. 6(3), 311–318 (2001). [CrossRef]   [PubMed]  

14. ChromoDynamics, Inc., Orlando, FL, “HSi-300 Hyperspectral Imaging System Data Sheet”. http://www.chromodynamics.net/.

15. Cambridge Research and Instrumentation, Inc., Cambridge, MA, “VARISPEC Liquid Crystal Tunable Filters Brochure”. http://www.cri-inc.com/

16. Z. Malik, D. Cabib, R. A. Buckwald, A. Talmi, Y. Garini, and S. G. Lipson, “Fourier transform multipixel spectroscopy for quantitative cytology,” J. Microsc. 182(2), 133–140 (1996). [CrossRef]  

17. D. Y. Hsu, J. W. Lin, and S. Y. Shaw, “Wide-range tunable Fabry-Perot array filter for wavelength-division multiplexing applications,” Appl. Opt. 44(9), 1529–1532 (2005). [CrossRef]   [PubMed]  

18. S. A. Mathews, “Design and fabrication of a low-cost, multispectral imaging system,” Appl. Opt. 47(28), F71–76 (2008). [CrossRef]   [PubMed]  

19. H. Matsuoka, Y. Kosai, M. Saito, N. Takeyama, and H. Suto, “Single-cell viability assessment with a novel spectro-imaging system,” J. Biotechnol. 94(3), 299–308 (2002). [CrossRef]   [PubMed]  

20. A. Bodkin, A. I. Sheinis, and A. Norton, “Hyperspectral imaging systems,” U. S. Patent 20060072109A1 (2006).

21. B. K. Ford, C. E. Volin, S. M. Murphy, R. M. Lynch, and M. R. Descour, “Computed tomography-based spectral imaging for fluorescence microscopy,” Biophys. J. 80(2), 986–993 (2001). [CrossRef]   [PubMed]  

22. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15(21), 14013–14027 (2007). [CrossRef]   [PubMed]  

23. A. Wagadarikar, R. John, R. Willett, and D. J. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47(10), B44–51 (2008). [CrossRef]   [PubMed]  

24. B. Ford, M. Descour, and R. Lynch, “Large-image-format computed tomography imaging spectrometer for fluorescence microscopy,” Opt. Express 9(9), 444–453 (2001). [CrossRef]   [PubMed]  

25. A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, “Video rate spectral imaging using a coded aperture snapshot spectral imager,” Opt. Express 17(8), 6368–6388 (2009). [CrossRef]   [PubMed]  

26. L. Weitzel, A. Krabbe, H. Kroker, N. Thatte, L. E. Tacconi-Garman, M. Cameron, and R. Genzel, “3D: The next generation near-infrared imaging spectrometer,” Astron. Astrophys. Suppl. Ser. 119(3), 531–546 (1996). [CrossRef]

27. S. Vivès and E. Prieto, “Original image slicer designed for integral field spectroscopy with the near-infrared spectrograph for the James Webb Space Telescope,” Opt. Eng. 45(9), 093001 (2006). [CrossRef]  

28. F. Henault, R. Bacon, R. Content, B. Lantz, F. Laurent, J. Lemonnier, and S. Morris, “Slicing the universe at affordable cost: the quest for the MUSE image slicer,” Proc. SPIE 5249, 134–145 (2004). [CrossRef]  

29. J. A. Smith, “Basic principles of integral field spectroscopy,” N. Astron. Rev. 50(4-5), 244–251 (2006). [CrossRef]  

30. F. Laurent, F. Henault, E. Renault, R. Bacon, and J. Dubois, “Design of an Integral Field Unit for MUSE, and Results from Prototyping,” Publ. Astron. Soc. Pac. 118(849), 1564–1573 (2006). [CrossRef]  

31. “Mechanisms of 3D intercellular signaling in mammary epithelial cells in response to low dose, low-LET radiation: Implications for the radiation-induced bystander effect,” Biological Sciences Division Research Highlights, Pacific Northwest National Laboratory (2004). http://www.pnl.gov/

32. W. Preuss and K. Rickens, “Precision machining of integral field units,” N. Astron. Rev. 50(4-5), 332–336 (2006). [CrossRef]  

33. C. M. Dubbeldam, D. J. Robertson, D. A. Ryder, and R. M. Sharples, “Prototyping of Diamond Machined Optics for the KMOS and JWST NIRSpec Integral Field Units,” in Optomechanical Technologies for Astronomy, E. Atad-Ettedgui, J. Antebi, and D. Lemke, eds., Proc. SPIE 6273, 62733F (2006).

34. H. E. Bennett, J. M. Bennett, and E. J. Ashley, “Infrared Reflectance of Evaporated Aluminum Films,” J. Opt. Soc. Am. 52(11), 1245–1250 (1962). [CrossRef]  



Figures (12)

Fig. 1
Fig. 1 The operating principle of the ISS system
Fig. 2
Fig. 2 ISS system setup. Fig. (a) is a photograph of the system. A switchable dual-port image relay is mounted on the microscope side port. One port is connected to the ISS system. The other can be used as a direct imaging port to provide a standard image or reference spectrum. Fig. (b) is the schematic layout. Light rays reflected from different tilted slicing components are labeled with different colors. Note that only tilts with respect to the y-axis are shown in the figure.
Fig. 3
Fig. 3 Pupil selection principle. Fig. (a) shows one of the image slicer’s repeating blocks, and Fig. (b) shows the corresponding pupil plane. In (a) the arrow on each slicing component represents the tilt direction (there is no arrow on slicing component 13 because it has no tilt) and the sequential number represents the slicing component index. Light reflected from each slicing component in this block will enter the corresponding pupil in (b). The dimensions of the slicing components in the figure are scaled to show their features. In the prototype, each slicing component is 16mm in length (Y direction) and 160µm in width (X direction).
Fig. 4
Fig. 4 Raster flycutting on Nanotech 250UPL. Red coordinate arrows indicate the X, Y, and Z axes of the machine. C axis is not used in raster flycutting mode.
Fig. 5
Fig. 5 The profile of the image slicer. Figs. (a) and (b) are photographs. In (a), the sliced Rice logo letters can be seen directly in the reflection direction. In (b), a quarter is placed as a size reference for the image slicer (16mm × 16mm). Fig. (c) is a three-dimensional rendering of a portion of the image slicer obtained with a Zygo white light interferometer.
Fig. 6
Fig. 6 Slicing component surface height profile. The roughness data are obtained after tilt removal. Surface roughness RMS value = 6 nm.
Fig. 7
Fig. 7 Reimaging lenses and mount. Fig. (a) is a photograph of the whole piece. There are 25 tubes inside this mount, each holding a reimaging lens set. Fig. (b) gives the cross-section view of a single tube. 60mm F.L. achromatic doublets are mounted at the back of the tube (facing the pupil), while −12.5mm F.L. achromatic doublets are mounted at the front of the tube (facing the image plane). The F.L. of this reimaging lens set is 350mm.
Fig. 8
Fig. 8 Overlap of the FOVs on the CCD camera. Each reimaging lens set images the corresponding pupil in the pupil plane (see Fig. 3(b)). The FOVs of adjacent reimaging lens are overlapping to fully utilize the CCD area. The image slicer itself creates a field stop allowing the overlap.
Fig. 9
Fig. 9 Undispersed image of a 1951 USAF target. The raw image (a) is obtained using a 16-bit camera without binning (pixel size = 9µm). Fig. (b) is the reconstructed image. For comparison purposes, an image of the same bars is captured at the microscope side port directly using a monochromatic camera; the result is shown in Fig. (c). The top bars in the FOV belong to Group 7, Element 6 (bar width = 2.19 µm).
Fig. 10
Fig. 10 The PSF of a single slice from an undispersed image. The camera pixel size equals 9µm. The x and y positions indicate the location of this slice image in the CCD camera’s global coordinates.
Fig. 11
Fig. 11 ISS images of green fluorescent beads. The raw image is obtained using a 16-bit CCD camera with 6s integration time. The bead’s spectrum is obtained from point A in the re-constructed image.
Fig. 12
Fig. 12 ISS images of red and yellow fluorescent beads. The raw image is obtained using a 16-bit CCD camera with 2s integration time. The yellow bead’s spectrum is from point B in the re-constructed image and the red bead’s spectrum is from point C in the re-constructed image.

Tables (2)

Table 1 Fabrication parameters for image slicer

Table 2 Slicing component tilt angle measurement
