
Rapid microscopy measurement of very large spectral images

Open Access

Abstract

The spectral content of a sample provides important information that cannot be detected by the human eye or by using an ordinary RGB camera. The spectrum is typically a fingerprint of the chemical compound, its environmental conditions, phase and geometry. Thus, measuring the spectrum at each point of a sample is important for a large range of applications, from art preservation through forensics to the pathological analysis of tissue sections. To date, however, there is no system that can measure the spectral image of a large sample in a reasonable time. Here we present a novel method for scanning very large spectral images of microscopy samples, even if they cannot be viewed in a single field of view of the camera. The system is based on capturing information while the sample is being scanned continuously ‘on the fly’. Spectral separation implements Fourier spectroscopy by using an interferometer mounted along the optical axis. A high spectral resolution of ~5 nm at 500 nm could be achieved with a diffraction-limited spatial resolution. The acquisition time is fairly short: 6-8 minutes for a sample size of 10 mm x 10 mm measured under a bright-field microscope using a 20X magnification.

© 2016 Optical Society of America

1. Introduction

Spectral imaging combines two widespread methodologies, spectroscopy and imaging, thus generating advantages that cannot be obtained by imaging or spectroscopy alone. In biological and clinical studies, spectral imaging extends existing capabilities by enabling the simultaneous study of multiple features, such as organelles and proteins, qualitatively and quantitatively. Progress in imaging techniques has led to a broad range of applications throughout the sciences, including the life sciences and pathology, where there is a growing need to acquire very large images. Measuring a spectral image of such large samples is, however, demanding. To the best of our knowledge, a system that can measure large spectral images in a reasonable time does not exist.

The field of digital pathology has grown rapidly in the last few years, and the ability to scan a pathological sample and process it digitally has significant advantages. These include the ability to store data and easily share it with experts, the ability to visualize the data in a user-friendly environment, and advanced reporting capabilities. Furthermore, these advantages can be translated into improved diagnostic accuracy and patient safety. As sub-specializations of practitioners continue to develop, digital pathology has significant advantages [1]. Pathological samples are usually large, in the range of 1-6 cm2, thus requiring the use of whole slide imaging (WSI) systems [2]. Digital pathology is advantageous not only for archiving and sharing samples, but also for enabling accurate and objective analyses of samples based on mathematical algorithms that use image and signal processing [3].

Among other applications, recent studies in pathology show that the spectral data can be used to distinguish between different types of cells or tissues such as healthy or cancer cells [4]. Spectral image analysis provides a new dimension to pathological analysis that cannot be observed by the human eye. This may be crucial in the case of a pathological analysis that is extremely difficult and demanding. It can save considerable time and assist the surgeon in determining tumor boundaries for instance during surgery [5].

However, measuring a spectral image is not straightforward. The biggest hurdle is that three-dimensional data, I(x, y, λ), need to be measured while using a 2D detector such as a CMOS or CCD, a 1D detector or even a single-point detector. Nevertheless, several methods have been developed for spectral imaging. One of these, which is presumably the simplest to operate, uses a set of band-pass filters mounted on a filter wheel placed in front of the camera. By taking a set of images while changing the filters, the spectral image is constructed one wavelength at a time. Another method uses a matrix of color filters that contains an array of transmission filters, one next to the other. When the array is attached to a camera sensor [6], each pixel is covered by a different filter so that each group of pixels on the sensor can measure the spectrum and represents the smallest ‘pixel’ in the final image, similar to an RGB camera. Other systems use tunable acousto-optic [7] or liquid-crystal tunable filters [8].

Spectral imaging can also use a diffraction element such as a grating or prism, where the spectrum is scanned pixel-by-pixel or line-by-line [9]. Each method has its pros and cons that depend on the measurement parameters and the actual application such as acquisition time, spectral and/or spatial resolution [10–12]. A few different commercial spectral cameras are available that are based on these methods.

However, the spectral imaging method used in the system described below, is based on Fourier spectroscopy. In Fourier spectroscopy, the image passes through an interferometer that provides the interference pattern on the detector plane. The intensity from each pixel must be measured after passing through different optical path differences (OPD) so that an interferogram can be acquired (the intensity as a function of the OPD) for each pixel. The interferogram is actually the Fourier transform of the spectrum; hence the spectrum can be found by the inverse Fourier transform [13, 14]. One convenient method is to use a Sagnac interferometer [15] that belongs to the family of common-path interferometers. It can be shown that when a collimated beam enters the Sagnac interferometer (Fig. 1), an OPD is created, which is a linear function of the beam angle of entrance with respect to the optical axis:

$$l = C\theta$$
where l is the OPD, C is a constant that depends on the interferometer configuration and θ is the angle. Most modern microscopes are infinity-corrected, which means that the light that originates at each point of the sample forms a collimated beam at the microscope exit port. In other cases, a lens can be used to transform the image into an infinity-corrected one. In such a system, the set of beams that originate from the whole sample is translated into a set of collimated beams that travel at different angles after passing through the lens L1, as shown in Fig. 1. The system we describe here is based on placing a Sagnac interferometer in the infinity-corrected beam. Another lens (L2 in Fig. 1) is used for focusing the image on the array detector [16]. The purple and green lines represent a set of two collimated beams, each of which originates from a different point in the sample. As shown, each is focused again to a single point on the detector array, although each of the beams goes through a different OPD, which results in an intensity that depends on the OPD. M1 and M2 are two mirrors and BS is a beam splitter that splits the light so that the two split beams travel in opposite directions until they hit the beam splitter again and merge to interfere on the detector.


Fig. 1 The Sagnac interferometer implemented as part of a spectral imaging system. M1 and M2 are mirrors, L1 is a collimating lens, L2 is a focusing lens and BS is a beam splitter. The black line represents the optical axis. The on-axis beam (green lines) has the same path for the reflectance and transmittance arms of the interferometer and therefore creates a zero OPD. An off-axis beam (purple lines) along the x-axis has a different path for the transmittance arm (solid lines) and the reflectance arm (dashed lines); hence it creates a non-zero OPD. Since the angle of the entrance beam depends on the position of the point in the sample relative to the optical axis, each point in the sample along the x-axis has a different OPD.


The SpectraView HyperSpectral imaging system (Applied Spectral Imaging) is based on the Sagnac interferometer. During the measurement, the sample is fixed in place, and the system always measures the same field of view (FOV) multiple times, while rotating the interferometer itself (elements M1, M2 and BS of Fig. 1 together) in between each two consecutive images. At each capture of an image, each pixel of the detector measures a different OPD that depends on the rotation angle [16, 17]. At the end of the acquisition process, the set of images contains the interferograms for all the pixels and the data are processed to provide the spectral image.

This method requires an optical setup that, in addition to the optics, also contains a mechanical motor and controllers to rotate the interferometer. In this setup, the sample cannot move during the acquisition. Therefore, if a sample larger than the FOV has to be measured, the procedure is to acquire a spectral image of a single FOV, move the stage to the next FOV, acquire another spectral image, and so on until the whole sample is covered. This slows down the acquisition of large images. The need to stop the sample and move it again to the next FOV many times can also cause the sample to shift, which makes it difficult to tile the individual images into one large image.

To achieve a much shorter acquisition time and to overcome these problems, we describe a new method for rapid measurement of large spectral images. It is based on Fourier spectroscopy and designed for measuring very large samples that cannot be captured in a single FOV. For example, measuring a tissue section of 1 cm x 1 cm with a 20X objective and a camera with a chip area of 1 cm x 1 cm would require acquiring at least 400 consecutive spectral images. In contrast, the method we describe here acquires the full spectral image during a continuous scan of the sample by using a motorized stage, without needing to stop at each FOV. It is therefore ideal for applications such as whole slide scanning and may be crucial for pathological applications [1].

The system is based on Fourier spectroscopy as described above (Fig. 1). Unlike existing systems, the interferometer itself has no moving parts. It remains fixed while the sample itself is continuously scanned by a microscope stage (Fig. 2), so that the camera, which continuously captures images along the scan, captures a different FOV of the sample each time. Because the Sagnac interferometer creates an OPD that depends on the entrance angle (Fig. 1), each pixel along the x-axis of the detector is measured at a different OPD (Fig. 2). As a result, during the sample scan, the light that comes from each point of the sample along the x-axis passes through a different OPD while it is being imaged. Images are captured continuously with a short exposure time so that there is a negligible smear of the sample points. The sample scan speed and detector frame rate are synchronized so that the position of each pixel on the camera is known for each captured image. At the end of the measurement, the intensities that were measured for each sample point through different OPDs are collected to form the interferogram (Fig. 3), which is then Fourier transformed to get the spectrum of each sample point. This method is highly compatible with microscopy systems that typically have a computer-controlled scanning stage. With such microscopes, the actual optical system consists of a rather small interferometer attached to the camera with no moving parts, motors or controllers. A similar optical concept was described for measuring near-infrared (NIR) spectral images in remote sensing [18]; that work emphasized the NIR spectral range, the remote-sensing application and the light radiation parameters, but did not treat measuring large images at high speed in a microscopy setup.


Fig. 2 Illustration of the measurement procedure with the new system. Each strip of the image is measured while the sample is continuously moving at a constant velocity. The red box represents the area captured by the camera at three consecutive time-points. The sample travels a distance d between each two images. As a result, each point in the sample is measured multiple times, each time at a different OPD that varies along the x-axis. The arrows pointing to the interferogram shown at the bottom describe this process. Therefore, for each point in the sample, an interferogram is formed by collecting data from a different pixel on the camera in each captured image. The interferogram is then Fourier transformed to get the spectrum at each pixel.



Fig. 3 Scheme of interferogram construction for the first k images. The captured images shown in Fig. 2 are shifted with respect to one another. The size of the shift in pixels, p, depends on the system calibration and acquisition conditions. The interferogram of each point in the sample is constructed by collecting data from different pixels that are shown along each of the red lines in the figure.


2. Principles of Fourier spectroscopy

In a spectral imaging system based on Fourier spectroscopy, the light from each point in the sample is collimated by a lens and passes through the interferometer. The collimated beam is split by the beam splitter (Fig. 1, BS) into two beams that travel in opposite directions. The beams travel along different paths, which creates the OPD, and they are focused again on the plane of the camera where they interfere (Fig. 1). Therefore, the measured intensity depends on the OPD as well as on the total (integral over the spectrum) intensity of the relevant source point:

$$I_d(l) = 0.5\left[\int I_{in}(\sigma)\,d\sigma + \int I_{in}(\sigma)\cos(2\pi l \sigma)\,d\sigma\right]$$
where l is the OPD, σ is the wavenumber, which is the natural unit in Fourier spectroscopy (σ = 1/λ), and Iin is the intensity that originates from the point on the sample. The first term on the right side of Eq. (2) is a constant that does not depend on the OPD and describes the total intensity. The second term is equal to the real part of the Fourier transform of the spectrum. Hence, by calculating an inverse Fourier transform, one can find the intensity as a function of the wavenumber, I(σ). This function describes the intensity as a function of energy, since E = hc∙σ, where h is the Planck constant and c is the speed of light. It can also be translated into intensity as a function of wavelength, I(λ), which is the spectrum.
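To make Eq. (2) concrete, here is a minimal Python sketch (not the authors' code; all parameter values are illustrative) that simulates the interferogram of a Gaussian test spectrum on a symmetric OPD grid and recovers the spectrum with a discrete Fourier transform:

```python
import numpy as np

# Illustrative parameters (not taken from the paper)
n = 512                                  # number of interferogram samples
opd_max = 20e-6                          # maximal OPD [m]
l = np.linspace(-opd_max, opd_max, n)    # symmetric OPD axis [m]

# A Gaussian test spectrum centered at 550 nm, on a wavenumber axis sigma = 1/lambda
sigma = np.linspace(1/800e-9, 1/400e-9, 2000)             # [1/m]
spectrum = np.exp(-0.5 * ((1/sigma - 550e-9) / 10e-9)**2)

# Eq. (2): I_d(l) = 0.5 [ int I(sigma) dsigma + int I(sigma) cos(2 pi l sigma) dsigma ]
dsig = sigma[1] - sigma[0]
dc = spectrum.sum() * dsig
ac = (spectrum[None, :] * np.cos(2*np.pi * l[:, None] * sigma[None, :])).sum(axis=1) * dsig
interferogram = 0.5 * (dc + ac)

# Subtract the constant term, then recover the spectrum (up to scale) with a DFT;
# freqs is the wavenumber axis [1/m], so the wavelength axis is 1/freqs
recovered = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
freqs = np.fft.rfftfreq(n, d=l[1] - l[0])
```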

The OPD l can only be measured over a limited range, which is commonly symmetrical, $-\mathrm{OPD}_{max} \le l \le \mathrm{OPD}_{max}$. As shown in Eq. (2), the inverse Fourier transform can therefore only be implemented over a limited OPD range, an effect that leads to a limited spectral resolution. Mathematically, the measured interferogram can be described as an infinite interferogram multiplied by a “window” function with a width of 2OPDmax. By the convolution theorem, this leads to the actual Fourier transform of the spectrum convolved with a sinc function, which demonstrates that the spectral resolution is limited and depends on the inverse of the maximal OPD. In addition, due to the oscillatory nature of the sinc function, it has side-lobes that can be misleading. It is common practice in Fourier spectroscopy to remove them by using an apodization algorithm [13]. There are different apodization methods, in the spectral and the OPD domains. In the OPD domain, apodization is done by multiplying the interferogram by a function that smoothly reduces to zero at the edges of the measured interferogram. As a side effect, the apodization operation also broadens the spectrum to a certain degree. Different smoothing functions can be used that trade off smoothing the lobes on one hand against broadening the spectrum on the other. We adopted the commonly used Happ-Genzel apodization function [19]:

$$HG(l) = 0.54 + 0.46\cos(\pi l / L)$$
where the points of the interferogram are in the range -L ≤ l ≤ L (L = OPDmax).
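A minimal sketch of Eq. (3), continuing the example above (the window length and the reuse of `interferogram` are illustrative):

```python
import numpy as np

def happ_genzel(n):
    """Happ-Genzel apodization window, Eq. (3), for an interferogram
    sampled symmetrically on -L <= l <= L."""
    l_over_L = np.linspace(-1.0, 1.0, n)
    return 0.54 + 0.46 * np.cos(np.pi * l_over_L)

apodized = interferogram * happ_genzel(len(interferogram))  # interferogram from the sketch above
```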

In practice, the required spectral resolution depends on the specific application and can be traded off. The system described here has the advantage that the spectral resolution can easily be set by optical alignment. During a measurement, each point of the sample is scanned across the horizontal axis of the camera, which is also the OPD axis; OPDmax can therefore be tuned to the optimal value. As will be explained below, the selected spectral resolution affects the total acquisition time, and therefore choosing the optimal level is important. When imaging uniform monochromatic light at wavelength λ through the system, the image on the camera appears as a set of bright-dark strips along the scanning axis. The intensity at each pixel is $I_d(x,y) = 0.5\,I(x,y)\left[1 + \cos(2\pi x/w)\right]$, where I(x,y) is the sample intensity at position (x,y) and w is the number of pixels per period (at wavelength λ) as determined by the optical settings. To optimize the spectral resolution, w can be tuned by aligning the relative angles of the interferometer elements.

Another data processing operation that is commonly performed prior to the inverse–Fourier transformation is zero filling. In this process, the number of points along the interferogram is extended by adding more points with value zero to the edges. As a result, the number of points in the discrete transform increases and the spectrum will have more points in the same spectral range (similar to the interpolation process). Therefore, the relevant features in the spectrum will appear with higher accuracy.
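Continuing the sketch, zero filling can be expressed with the padded-transform option of NumPy's FFT (the 4x factor is an arbitrary illustration):

```python
import numpy as np

# Zero filling before the transform: np.fft.rfft pads the input with zeros
# when n exceeds its length, which interpolates the spectrum onto a denser
# wavenumber grid without adding new information.
n_fft = 4 * len(apodized)                 # 4x zero filling (illustrative factor)
spectrum_dense = np.abs(np.fft.rfft(apodized - apodized.mean(), n=n_fft))
```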

When the measured interferogram is symmetric, which is common practice in Fourier spectroscopy, the inverse Fourier transform has only real values. In practice, the interferogram is never perfectly symmetric and, with the addition of noise, the inverse Fourier transform is always a complex function that also includes imaginary values. There are different methods for obtaining the real spectrum, normally by using phase correction. When the imaginary part is rather small, as in our case, it is sufficient to calculate the absolute value of the inverse Fourier transform, which by definition gives a real spectrum per pixel.

3. Measurement parameters and image reconstruction

3.1. Sampling frequency

As stated above, the acquisition principle of our system is based on measuring the sample multiple times while it is being scanned. As a result, each point of the sample is measured at many different pixels along the x-axis of the detector, and therefore at different OPDs. This information is sufficient for calculating the spectrum at each point, as explained in the previous sections.

According to the Nyquist sampling theorem [20], the sampling rate (the density of points in the interferogram) should be at least twice the highest fringe frequency that exists in the measured interferogram. In the current system, the camera captures images at a maximal frame rate of 140 frames per second (fps), which limits the maximal sample velocity. In other words, the distance d that each point in the sample travels on the camera array between two consecutive captured images should be shorter than half of one OPD fringe period as measured for the shortest wavelength in the sample, d ≤ w_min/2 (Fig. 2), where w_min [pixels/period] is the shortest fringe period. In practice, it is even better to sample four points per period. Sampling at a higher frequency (a shorter distance d) leads to an unnecessarily long acquisition time without adding relevant information.

Taking all these Fourier spectroscopy considerations into account, the acquisition procedure captures a series of N images with the camera, while the stage moves the sample at a velocity that ensures a distance d (Fig. 2) between consecutive images. It continues along a complete strip of the sample, regardless of length. To measure the whole sample along the other axis as well, more strips of the image are measured consecutively. In typical bright-field samples, where there is no need for high spectral resolution, 60-80 points per interferogram are sufficient.
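As a rough illustration of this choice (the fringe period `w_min` is an assumed value that, in practice, is set by the interferometer alignment and found during calibration):

```python
# Choosing the scan step d from the Nyquist criterion (illustrative values).
w_min = 60.0            # shortest fringe period [pixels/period] in the sample
d_nyquist = w_min / 2   # hard Nyquist limit on the frame-to-frame shift [pixels]
d = w_min / 4           # practical choice: four samples per fringe period
k = 64                  # interferogram points per pixel (60-80 typical for bright field)
```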

3.2. Exposure time considerations

In order to shorten the acquisition time, the scanning velocity of the sample is crucial, as explained above. To achieve the highest possible speed, the images are captured ‘on the fly’ while the sample moves at a constant speed along each strip of the spectral image. As a result, the exposure time τ must be limited with respect to the scan velocity. Note that the exposure time τ is not determined by the camera frame rate fcamera, as it can be selected separately in many modern cameras (τ ≤ 1/fcamera). To set the optimal exposure time, a maximal blurring parameter s, the “allowed” movement distance (in pixels) during the exposure time τ, needs to be defined. The scanning velocity is then matched to the exposure time so that the motion blur of the stage stays within the defined range. A reasonable value for s is 0.25-0.5 pixels, although even a full pixel of blur may not be significant. This can be evaluated by measuring the width of the point spread function (PSF) of the system. With a 5.5 µm pixel size, a 20X objective lens with NA = 0.4 and λ = 500 nm, the width of the PSF is 0.6∙λ/NA = 0.6∙500/0.4 ≈ 750 nm and covers approximately 3 pixels (each pixel covers 5,500/20 = 275 nm of the image). A similar number of pixels covered by the PSF is also found for higher-magnification objective lenses. Therefore, a blur of a fraction of a pixel does not significantly affect the spatial resolution. Accordingly, the maximal allowed scanning velocity v is given by:

$$v = \frac{s \cdot p_x}{\tau}$$
where $p_x$ is the width of a pixel after the microscope magnification (e.g. 275 nm with the parameters described above).

For bright images where very short exposure times can be used (<100 microseconds), Eq. (4) gives very high stage velocity. Nevertheless, the scanning velocity v is also limited by the sampling frequency as described in section 3.1. This gives:

$$v = d \cdot f_{camera}$$
Therefore, the scanning velocity is set to the smaller of the two (Eqs. (4) and (5)), as determined by the acquisition software in our system.

Consider for example the following typical values for a bright-field image: the distance between two consecutive images is p = 15 pixels, the pixel size at 20X magnification is px = 275 nm, the blurring parameter is s = 0.25 pixels, the exposure time is τ = 0.1 ms and the frame rate is fcamera = 150 fps. Equations (4) and (5) give v = 687.5 µm/sec and v = 618.75 µm/sec respectively, so the scanner velocity is set to 618 µm/sec, which is still very high. At this speed, scanning an image of 10 mm x 10 mm takes ~6 min. The spectral image measured with these parameters contains ~36,400 x 36,400 pixels, more than a giga-pixel for each wavelength in the spectral image, which requires a computer-storage capacity of more than 40 GB for a spectral image with 40 points in the spectrum.
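The same arithmetic, as a short sketch (values copied from the example above; the variable names are ours):

```python
# Worked example of Eqs. (4) and (5) with the values quoted in the text.
s = 0.25              # allowed blur [pixels]
px = 275e-9           # pixel size at the sample plane [m] (5.5 um / 20X)
tau = 0.1e-3          # exposure time [s]
p = 15                # shift between consecutive frames [pixels]
f_camera = 150.0      # frame rate [frames/s]

v_blur = s * px / tau            # Eq. (4): 687.5 um/s
v_sampling = p * px * f_camera   # Eq. (5) with d = p*px: 618.75 um/s
v = min(v_blur, v_sampling)      # the acquisition software takes the smaller value

strip_time = 10e-3 / v           # one 10 mm strip takes ~16 s at this speed
```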

For bright images, the exposure time can be as short as 10 µs, and by using faster cameras with a frame rate of 1000 fps (which already exist, e.g. the Photonis xSCELL), the acquisition time for such a large image can be reduced to 1-3 minutes.

Note that for low-signal samples, such as fluorescently labeled ones, the exposure time is much longer; hence, to prevent image blur, the scanner velocity must be very low. For such samples, the concept of continuous motion is inefficient, and a stop-and-go mechanism, which can be implemented in our system, could be considered instead.

As mentioned, the total measurement time is about 6 minutes, which is remarkably short for such a large spectral image. For comparison, we calculated the typical time it would take to measure the same spectral image with an existing commercial system. One such system is the Vectra 3 (Perkin Elmer, USA) [21], which is based on the Nuance spectral camera. The system can scan whole slides in RGB; however, as described in the product manual, the user can select specific ROIs and measure the spectral image in these regions. The typical acquisition time for a single spectral image of a bright-field sample is stated to be 12 seconds. To cover the full area of a 10X10 mm2 sample at 20X magnification, at least 400 images would have to be acquired (an estimate based on the known size of a typical array camera). Therefore, neglecting the time spent moving the sample from one ROI to the next, the whole measurement would take at least 80 minutes, ~13 times slower than our system.

We are not aware of any other system that can measure giant spectral images.

3.3. Construction of the interferograms and spectra

Once the information is stored, the interferograms of all pixels are constructed, and then the spectrum for each pixel is calculated. The construction of the interferograms uses different algorithms for the first k images in each strip, where k is the number of points in the interferogram, and for the rest of the images (each strip may contain thousands of images).

For the first k images, we construct a 3D array of images, where each image is shifted p pixels relative to the previous one, where p is equal to the distance d in units of pixels. Note that the interferograms for the left part of the image do not have all the required data (k points); to ensure a spectral image of the full object, scanning should start early enough that the left-most part of the object is not at the edge of the camera FOV. The same holds for the right part at the end of the scan, so the scan should end only when the right-most part of the sample has been captured by the left-most pixels of the camera.

The interferogram for each pixel is constructed from the intensities measured at this point along the images from 1 to k (Fig. 3, red line).

From here on, we drop the first image, add the next ((k+1)th) one and get the interferograms for the next p pixel columns (Fig. 4). This process is repeated by dropping the 2nd image, adding the (k+2)th one, and so on until the Nth image has been added. After adding the Nth image, the pixels to the right of the overlap area do not have enough data to construct their interferograms; hence they are discarded.
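A compact sketch of this shift-and-collect scheme is shown below. It is not the authors' implementation: it assumes an exact integer shift of p pixels per frame and no subpixel correction, and `build_interferograms` is an illustrative name:

```python
import numpy as np

def build_interferograms(frames, p, k):
    """Shift-and-collect interferogram construction (integer-pixel sketch).

    frames : (N, H, W) image stack from one constant-velocity strip, where
             the sample content shifts by exactly p pixels per frame
    p, k   : frame-to-frame shift [pixels] and interferogram length

    Returns an (H, n_cols, k) array: for every fully covered sample column,
    the k intensities measured at camera positions x = r, r+p, ..., r+(k-1)p,
    i.e. at k different OPDs along the scanning axis.
    """
    N, H, W = frames.shape
    assert k * p <= W, "the k interferogram points must fit across the camera"
    n_cols = (N - k + 1) * p                 # sample columns with complete data
    out = np.empty((H, n_cols, k), dtype=frames.dtype)
    for c in range(n_cols):
        X = (k - 1) * p + c                  # column index in sample coordinates
        r, q = X % p, X // p                 # camera x offset and frame of point j = 0
        for j in range(k):                   # earlier frames saw this column further right
            out[:, c, j] = frames[q - j, :, r + j * p]
    return out
```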


Fig. 4 Scheme of the interferogram construction for the rest of the images. Note that the actual interference fringes along the scanning axis are not shown.


Note that as N increases, the fraction of discarded non-overlapping pixels becomes smaller. After obtaining the array of all the interferograms, an inverse Fourier transform that includes apodization, zero filling and phase correction is applied to each interferogram to calculate the spectrum of each pixel.

3.4. Reconstruction of an image

When the spectral image is ready, an image must be reconstructed from it so that it can be displayed on the screen, as there is no way to present all the information contained in a spectral image at once. A convenient way is to integrate the spectrum of each pixel to find its total intensity, which gives a gray-level image. According to Parseval’s relation [22], a similar gray-level image can be generated by integrating the interferogram of each pixel.

In addition, a color image can be calculated from the spectral image. To create the color image, the spectrum of each pixel is converted to a triplet of red-green-blue (RGB) values. The conversion involves calculating the overlap integrals of the spectrum I(λ) with the color matching functions x̄(λ), ȳ(λ) and z̄(λ) to get the tristimulus values X, Y and Z, from which the RGB values are calculated using a transformation matrix [23, 24]. As the actual measured spectrum is affected by the spectral response of the system (mainly the camera, but also other optical elements), the color image may look different from the sample observed visually through the microscope. This can be corrected using color-enhancement methods such as white balance and gamma correction.
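As an illustration of this conversion (not the authors' pipeline), the sketch below uses an approximate analytic fit to the CIE 1931 color-matching functions (after Wyman et al., 2013) and the standard XYZ-to-linear-sRGB matrix; white balance and gamma correction are omitted:

```python
import numpy as np

def _g(lam, mu, s_left, s_right):
    """Piecewise Gaussian used by the analytic CMF approximation."""
    s = np.where(lam < mu, s_left, s_right)
    return np.exp(-0.5 * ((lam - mu) / s)**2)

def cmf_xyz(lam):
    """Approximate CIE 1931 color-matching functions (multi-lobe Gaussian
    fit after Wyman et al. 2013); lam in nm."""
    xb = (1.056*_g(lam, 599.8, 37.9, 31.0) + 0.362*_g(lam, 442.0, 16.0, 26.7)
          - 0.065*_g(lam, 501.1, 20.4, 26.2))
    yb = 0.821*_g(lam, 568.8, 46.9, 40.5) + 0.286*_g(lam, 530.9, 16.3, 31.1)
    zb = 1.217*_g(lam, 437.0, 11.8, 36.0) + 0.681*_g(lam, 459.0, 26.0, 13.8)
    return xb, yb, zb

def spectrum_to_rgb(lam, intensity):
    """Overlap integrals of I(lambda) with the CMFs, then XYZ -> linear sRGB."""
    xb, yb, zb = cmf_xyz(lam)
    X, Y, Z = (np.trapz(intensity * c, lam) for c in (xb, yb, zb))
    M = np.array([[ 3.2406, -1.5372, -0.4986],    # XYZ -> linear sRGB (D65)
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb = M @ np.array([X, Y, Z])
    return np.clip(rgb / rgb.max(), 0.0, 1.0)     # crude normalization; no gamma

# Example: a narrow Gaussian spectrum at 550 nm comes out green
lam = np.linspace(400, 800, 401)
print(spectrum_to_rgb(lam, np.exp(-0.5 * ((lam - 550) / 10)**2)))
```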

3.5. Computational performance

The computation and storage of such large spectral images require special attention. At this point, the system does not use any special hardware, and the calculation times and performance have not been optimized. The image-processing steps, which include building up the interferograms, calculating the FFTs and reconstructing the image, are computation-intensive on a standard PC due to the sheer amount of data processed and stored per second. For example, a camera with 1024x1024 pixels, a frame rate of 100 frames/sec and an interferogram of 50 points yields a data stream of ~105 MB/s of raw data. We expect the FFT to require on the order of $O(2n\log(2n))$ operations per interferogram, where k is the number of points in the interferogram and n is the number of points after zero filling; with state-of-the-art implementations [25] and [number of pixels/s]/k interferograms completed per second, this results in ~120·10⁹ floating-point operations per second. Although this is a demanding rate, it can be achieved on a PC platform given an optimized procedure and a multi-core CPU. It may also require adding hardware such as a GPU.
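The quoted rates can be checked with a few lines (the zero-filled length n and the complex-arithmetic factor are assumptions):

```python
import numpy as np

# Back-of-the-envelope check of the rates quoted above.
pixels_per_s = 1024 * 1024 * 100        # raw camera pixel rate
print(pixels_per_s / 1e6)               # ~105 MB/s at 1 byte per pixel

k, n = 50, 1024                         # interferogram points; zero-filled length (assumed)
specs_per_s = pixels_per_s / k          # interferograms completed per second, ~2.1e6
fft_ops = 2*n * np.log2(2*n)            # ~2.3e4 butterfly operations per FFT
print(specs_per_s * fft_ops * 2.5)      # ~1.2e11 flops/s with a complex-arithmetic factor
```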

Currently, the software is not optimized, and it takes about 25 μs to calculate the FFT and spectrum of a single pixel. As an example, it took approximately 35 minutes to calculate the spectral image shown in Fig. 8.

4. Calibration procedures

The system requires very few calibration procedures as described below. Some of these should be repeated for different microscope settings, such as the objective lens that is used for the measurement. The setting parameters can be saved for later use.

4.1. Spatial calibration

The purpose of this calibration is to find the exact spatial shift of the stage (in pixels) for a given set of control parameters sent to the stage. We found the stage to be very accurate in its absolute distance, and the calibration procedure takes the pixel size of the camera into account. We perform the calibration by capturing a bright object on a dark background, such as a pinhole, at two different positions and calculating the cross-correlation between the two images to obtain the shift (in pixels) along both axes [26].
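A sketch of this calibration step using FFT-based cross-correlation (integer-pixel only; the actual implementation may differ, e.g. with subpixel refinement):

```python
import numpy as np

def stage_shift(img_a, img_b):
    """Integer (dy, dx) shift between two images of the same bright object,
    estimated via FFT-based cross-correlation; a sketch of the spatial
    calibration step, with no subpixel refinement."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    corr = np.fft.irfft2(np.fft.rfft2(a) * np.conj(np.fft.rfft2(b)), s=a.shape)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:             # wrap large indices to negative shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```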

4.2. Spectral calibration

The Fourier transform, together with the pre- and post-processing, provides the intensities along the Fourier-channel axis for each pixel, but these channels have to be precisely converted to actual wavelength values. The natural calibration is λ = 1/σ, as mentioned above. Optical disturbances may slightly skew the wavelength dependence, and we therefore use a polynomial function with three calibration parameters. The parameters are found by measuring at least three known narrow band-pass filters (e.g. 450, 550 and 650 nm), providing at least three equations for fitting the three unknowns. This provides a precise calibration with a maximal error of ~2 nm over the spectral range. The spectral resolution in Fourier spectroscopy varies along the wavelength axis, $\Delta\lambda/\lambda^2 = 1/\mathrm{OPD}_{max}$, and also depends on the other pre- and post-processing steps. OPDmax is determined by setting the fringe density along the camera scanning axis and, as described below, a spectral resolution of 5-10 nm at 500 nm is easily achieved. The calibration process is fast, can easily be automated and yields highly reproducible results. It is illustrated below.
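A minimal sketch of the polynomial fit (the measured peak channels are invented for illustration; a second-order polynomial in 1/channel reflects the natural λ = 1/σ dependence):

```python
import numpy as np

# Fit channel -> wavelength with a 2nd-order polynomial through three known
# band-pass filters.
channels = np.array([181.0, 148.0, 125.0])      # measured peak Fourier channels (assumed)
wavelengths = np.array([450.0, 550.0, 650.0])   # known filter centers [nm]

coeffs = np.polyfit(1.0 / channels, wavelengths, deg=2)   # lambda ~ 1/sigma ~ 1/channel
to_wavelength = np.poly1d(coeffs)
print(to_wavelength(1.0 / 148.0))               # ~550 nm back at the calibration point
```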

5. Results

The system was developed using a set of two mirrors, a beam splitter (Applied Spectral Imaging) and a CMOS camera (Lumenera Lt225 NIR), as shown in Fig. 1. The system is controlled by software written in our laboratory (in C#) that controls the camera and the scanning stage (Prior ProScan II). The data are collected on SSD media and the spectral images are calculated at the end of the acquisition. The software further reconstructs the color images; additional analyses are performed in MATLAB with another software package we wrote.

To evaluate system performance, we tested the spatial resolution, spectral resolution, image quality and reproducibility. We also measured very large images as shown below.

Testing the spatial resolution is of crucial importance because the principle of the system is based on collecting the information for each point of the sample from different pixels on the camera, captured in a sequence of images while the sample is continuously scanned. It is therefore important to assess the performance of this process, since it may be vulnerable to spatial aberrations. We captured a ‘direct-mode image’, which is an image acquired in a single camera frame while the beam splitter is removed from the interferometer; this image has the highest quality that can be achieved with the given camera. It was then compared to the intensity gray-level image calculated from the spectral image by integrating the intensity over the entire spectral range. For the sample we used a USAF-1951 resolution target measured with a 20X objective lens (NA = 0.4). The narrowest slits in our target (group #7, element #6) were 2.19 µm wide, and due to the diffraction limit, each slit had a Gaussian cross section. We compared the cross sections along the two major axes of the camera, the x-axis (scanning axis) and the y-axis, and compared the full width at half maximum (FWHM) of the measured widths. Note that the FWHM is narrower than the real slit width, since it measures the width at half of the Gaussian height and not at the Gaussian base.

Figure 5 shows the results: the resolution along the vertical axis (perpendicular to the stage motion) is the same for both the direct-mode and the spectral image (FWHM of 1.43 µm for the spectral image and 1.42 µm for the direct-mode image). In the horizontal direction (parallel to the scanning axis), we found a slight broadening of the FWHM from 1.57 µm (~6 pixels) in the direct-mode image to 1.83 µm (~7 pixels) in the spectral image, a broadening of 1 pixel (rounded up).
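The FWHM comparison can be reproduced generically with a Gaussian fit such as the following sketch (SciPy-based; not the authors' analysis code):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma, c):
    return a * np.exp(-0.5 * ((x - mu) / sigma)**2) + c

def fwhm(x, y):
    """Gaussian fit to a slit cross-section, returning the FWHM in the
    units of x (a generic sketch of the Fig. 5 comparison)."""
    p0 = [y.max() - y.min(), x[np.argmax(y)], (x[-1] - x[0]) / 6, y.min()]
    (a, mu, sigma, c), _ = curve_fit(gaussian, x, y, p0=p0)
    return 2 * np.sqrt(2 * np.log(2)) * abs(sigma)   # FWHM = 2.355 * sigma
```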


Fig. 5 Testing the spatial resolution of the system by measuring a USAF-1951 resolution target in two different ways: 1. In the ‘direct mode’ where we take the beam splitter out of the optical path and therefore measure a normal image and 2. A gray-level image calculated from the spectral image measured for the same sample. As can be seen, along the horizontal axis (which is the scanning direction), there is a slight broadening of the spectral image (magenta) relative to the direct mode image (green). In the vertical axis, the spectral (red) and the direct mode (blue) images have the same resolution. Markers represent the pixel intensities and the lines are the Gaussian fits.


We also tested the spectral resolution and its precision, an important parameter in a spectral imaging system. We tuned the system to have dense interference fringes, and the Fourier transform was performed using 1024 points in the interferogram. We measured a set of narrow band-pass filters from 450 to 800 nm in steps of 50 nm (FWHM of 10 ± 2 nm, Thorlabs). The peak wavelength and the FWHM for each of the filters were calculated by fitting a Gaussian to the spectrum (Fig. 6). We found a precision (peak position) of ~1 nm in the range λ = 450-650 nm and up to 3 nm in the range λ = 700-800 nm. The spectral resolution (calculated by deconvolving the measured FWHM and the real filter FWHM) changed from ~5 nm at λ = 450 nm to ~17 nm at λ = 800 nm. As mentioned above, Δλ is proportional to λ², so these results fit the expected spectral resolution very well under these measurement conditions.
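For Gaussian profiles, the deconvolution step reduces to subtracting widths in quadrature, as in this small sketch (the input values are illustrative):

```python
import numpy as np

def spectral_resolution(fwhm_measured, fwhm_filter):
    """Gaussian widths add in quadrature, so deconvolving the known filter
    width from the measured width recovers the system resolution."""
    return np.sqrt(fwhm_measured**2 - fwhm_filter**2)

print(spectral_resolution(11.2, 10.0))   # illustrative values -> ~5 nm
```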


Fig. 6 Spectral measurements of a series of band-pass filters (450-800 nm in steps of 50 nm; FWHM of 10 ± 2 nm). The measured peak wavelength and FWHM for each filter are shown in the plot. The spectral resolution (calculated by deconvolving the measured FWHM and the real filter FWHM) was found to change from ~5 nm at λ = 450 nm to ~17 nm at λ = 800 nm.


We now describe two actual examples of spectral image measurement with the new system. We start with a spectral image that can be easily interpreted and then discuss the measurement of a very large pathological image.

Figure 7 shows a high spectral resolution image of a smartphone screen (Samsung Galaxy S) measured through a microscope with a 4X magnification (NA = 0.1). Each pixel of the screen's active-matrix organic light-emitting diode display consists of three different diodes that emit red, green or blue and together appear to the eye as a single color pixel. We created an image of the Bar-Ilan University logo (Fig. 7(A), upper inset) that consists of 25X25 pixels, where each part of the image is a different basic color, namely red, green or blue. This image was presented on the smartphone screen and captured with the spectral imaging system as a 2605x2175 pixel image. This is larger than the camera FOV, so the smartphone screen had to be scanned during the measurement.


Fig. 7 (A) The RGB image reconstructed from the spectral image of the BIU logo (upper inset), as measured from a smartphone screen; scale bar is 500 μm. The image consists of 2605x2175 pixels, which is larger than the size of the camera and demonstrates the scanning capabilities of the system. The inset at the bottom right shows a zoom-in on a single green smartphone diode. The image reflects various quality parameters, including sharpness, uniformity, noise uniformity, lack of image distortion and uniformity of the spectral measurement. (B) The normalized spectra measured at different places on the image, denoted by numbers in (A). The circles are the measured data; solid lines are a smoothed function. These spectra were compared to the spectra of the same smartphone screen measured with a spectrometer and showed excellent agreement.


Figure 7(A) shows the reconstructed RGB image of the sample. The inset at the bottom-right corner shows a zoom-in image of a single green diode.

The normalized spectra of three pixels from the image, marked as 1 (blue), 2 (green) and 3 (red), are shown in Fig. 7(B). These spectra were compared to the spectra measured from the same smartphone screen with a spectrometer (Avantes AvaSpec-mini) and were found to have identical spectral shapes with a spectral shift of less than 2 nm. This validates the spectral accuracy and reproducibility of the system.

Moreover, the image provides evidence of the qualities of the images produced with the system. These include:

  • 1. Distortion: All LEDs of the same type were equally spaced and lay along straight lines, which means that the system did not distort the image geometry.
  • 2. Sharpness: The spatial resolution was analyzed above. Figure 7 shows that the spatial resolution was uniform across the image. Note that each point of the sample was measured by different pixels along the scanning axis while the sample was being scanned, and was finally reconstructed. Therefore, by definition, if part of the image along the scanning axis is sharp, the whole image along the scanning axis should be sharp as well. The uniform sharpness of the LED images ensures that the image sharpness is uniform along the whole image, regardless of length. We tested the spatial accuracy of the green LED distances by fitting a Gaussian function to the intensity cross section for some of the LEDs (N = 37) and found that the distance between the LEDs was 79 ± 0.1 pixels and the FWHM was 8.5 ± 0.4 pixels.
  • 3. Noise: By comparing the noise level in a dark area of ~540X715 pixels, the intensity was found to be 6.9 ± 2.2, calculated from all the RGB channels. This noise is relatively low and was uniform along the image.

Altogether, these quality factors confirm that the principle of the system works well and does not degrade the image quality expected from the camera and the optics (objective lens and NA).

Finally, Fig. 8(A) shows a giant spectral image of a pathological bone marrow sample. The image size is 12,633 x 6,526 pixels (equal to ~36 camera FOVs), measured using a 10X objective lens (NA = 0.3) on a transmission microscope (Olympus IX81). The sample size is ~7 mm X 3.5 mm. Although a color image is shown, the RGB values were reconstructed from the spectral data at each pixel of the image. The bottom-left inset shows a zoom-in on a few separate cells as captured with an RGB camera (left) and from the spectrally reconstructed image (right); the information is comparable. The rectangle at the upper left shows the area that can be measured in a single FOV of the camera. This part of the image appears in Fig. 8(B), which also shows three spectra selected from the spectral image, marked with colored arrows. The image, reconstructed from a few measured strips, is very smooth even though no special tiling algorithms were needed, thanks to the accurate repeatability of the scanning stage.


Fig. 8 (A) The reconstructed giant white-balanced RGB image (~13,000 x 6,500 pixels, 85 megapixels) of a bone marrow tissue section of size ~7 mm X 3.5 mm. The inset at the bottom left compares a small group of cells measured with a normal RGB camera (left) and the spectrally reconstructed image (right). The upper rectangular frame represents the area of a single FOV that can be captured by the camera. (B) Zoom-in on the framed area shown in (A). The inset shows the spectra of three points marked in the image by the colored arrows. Note the differences between the spectra: apart from the different intensities, there are distinct spectral features visible in the spectral shape at certain ranges and in the peak positions.


6. Discussion and conclusion

Spectral imaging provides information that cannot be replaced by capturing a three-component color image. It is already in use for different applications in a variety of areas such as remote sensing, agriculture, industrial inspection, forensics and biomedicine. Although imaging has progressed in the last decade, there are still challenges to be met in the development of the actual optical system, data analysis and data management. Here we presented a spectral imaging system based on a new optical concept that can measure the spectral images of very large samples, which cannot be observed in a single field of view of a camera.

We described a new optical concept for a fairly compact system that has no moving parts and can be used with any scanning-mechanism platform. We demonstrated the system, its spatial and spectral performance, its imaging properties and its capability to acquire very large spectral images. The system's key advantage is that the spectral resolution is flexible and can be set for the actual measurement without changing anything in the optics. Therefore, it can acquire spectral images rapidly. For example, measuring a 1X1 cm2 sample at 20X magnification with 40 points in the spectral range of 400-800 nm takes approximately 6 minutes.

The advantages of the system are significant as long as the exposure time is short, say in the range of 10 ms or less. When long exposure times are needed, which is typically the case for fluorescent samples where the exposure time is at least 50-100 ms, the concept we describe has no advantage, because a continuous scan of the sample would require an impractically slow scan speed. For such cases, it may be better to move the sample and stop the motion for each image captured. These two concepts can be integrated into the same system without adding any optical or scanning elements. Furthermore, as in whole slide imaging systems, scanning a sample with a large area normally requires re-focusing along the scan. This mechanism has not yet been implemented in our current system. It could be added in different ways, similar to the implementations in WSI systems, but this is beyond the scope of this work.

We tested the validity of the system by measuring an image of a smartphone screen, as well as a pathological specimen. As digital pathology is expanding at a rapid pace thanks to the advent of whole slide imaging, we believe that this system can save significant time and provide pathological applications with important information that cannot be observed by the eye alone; hence improving patients’ healthcare in the long term.

Acknowledgment

This work was supported in part by Applied Spectral Imaging, Yokneam, Israel, the Israel Centers of Research Excellence (ICORE) grant 1902/12 and the Israel Science Foundation grant 51/12.

References and links

1. F. Ghaznavi, A. Evans, A. Madabhushi, and M. Feldman, “Digital imaging in pathology: whole-slide imaging and beyond,” Annu. Rev. Pathol. 8(1), 331–359 (2013).

2. L. Pantanowitz, “Digital images and the future of digital pathology,” J. Pathol. Inform. 1(1), 15 (2010).

3. A. Madabhushi, “Digital pathology image analysis: opportunities and challenges,” Imaging Med. 1(1), 7–10 (2009).

4. W. Huang, K. Hennrick, and S. Drew, “A colorful future of quantitative pathology: validation of Vectra technology using chromogenic multiplexed immunohistochemistry and prostate tissue microarrays,” Hum. Pathol. 44(1), 29–38 (2013).

5. H. L. Fu, B. Yu, J. Y. Lo, G. M. Palmer, T. F. Kuech, and N. Ramanujam, “A low-cost, portable, and quantitative spectral imaging system for application to biological tissues,” Opt. Express 18(12), 12630–12645 (2010).

6. J. M. Eichenholz, N. Barnett, Y. Juang, D. Fish, S. Spano, E. Lindsley, and D. L. Farkas, “Real-time megapixel multispectral bioimaging,” Proc. SPIE 7568, 75681L (2010).

7. E. S. Wachman, W. Niu, and D. L. Farkas, “AOTF microscope for imaging with increased speed and spectral versatility,” Biophys. J. 73(3), 1215–1222 (1997).

8. D. N. Stratis, K. L. Eland, J. C. Carter, S. J. Tomlinson, and S. M. Angel, “Comparison of acousto-optic and liquid crystal tunable filters for laser-induced breakdown spectroscopy,” Appl. Spectrosc. 55(8), 999–1004 (2001).

9. M. B. Sinclair, J. A. Timlin, D. M. Haaland, and M. Werner-Washburne, “Design, construction, characterization, and application of a hyperspectral microarray scanner,” Appl. Opt. 43(10), 2079–2088 (2004).

10. Y. Garini and E. Tauber, “Spectral imaging: methods, design, and applications,” in Biomedical Optical Imaging Technologies: Design and Applications, R. Liang, ed. (Springer, 2013).

11. Y. Garini, I. T. Young, and G. McNamara, “Spectral imaging: principles and applications,” Cytometry A 69(8), 735–747 (2006).

12. M. A. Golub, M. Nathan, A. Averbuch, E. Lavi, V. A. Zheludev, and A. Schclar, “Spectral multiplexing method for digital snapshot spectral imaging,” Appl. Opt. 48(8), 1520–1526 (2009).

13. R. J. Bell, Introductory Fourier Transform Spectroscopy (Academic, 1972).

14. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).

15. A. Barducci, D. Guzzi, C. Lastri, P. Marcoionni, V. Nardino, and I. Pippi, “Theoretical aspects of Fourier Transform Spectrometry and common path triangular interferometers,” Opt. Express 18(11), 11622–11649 (2010).

16. Z. Malik, D. Cabib, R. A. Buckwald, A. Talmi, Y. Garini, and S. G. Lipson, “Fourier transform multi-pixel spectroscopy for quantitative cytology,” J. Microsc. 182(2), 133–140 (1996).

17. Y. Garini, M. Macville, S. du Manoir, R. A. Buckwald, M. Lavi, N. Katzir, D. Wine, I. Bar-Am, E. Schröck, D. Cabib, and T. Ried, “Spectral karyotyping,” Bioimaging 4(2), 65–72 (1996).

18. D. Cabib, A. Gil, M. Lavi, R. A. Buckwald, and S. G. Lipson, “New 3-5 μ wavelength range hyperspectral imager for ground and airborne use based on a single element interferometer,” Proc. SPIE 6737, 673704 (2007).

19. H. Happ and L. Genzel, “Interferenz-modulation mit monochromatischen millimeter-wellen,” Infrared Phys. 1(1), 39–48 (1961).

20. H. Nyquist, “Certain topics in telegraph transmission theory,” Trans. Am. Inst. Electr. Eng. 47(2), 617–644 (1928).

21. Vectra 3.0 Quantitative Pathology Imaging System User's Manual (Perkin Elmer, 2015).

22. I. N. Sneddon, Fourier Transforms (Courier Corporation, 1995).

23. B. J. Lindbloom, “Spectral Computation of XYZ,” http://www.brucelindbloom.com/index.html?Eqn_Spect_to_XYZ.html.

24. I. T. Young, J. J. Gerbrands, and L. J. Van Vliet, Fundamentals of Image Processing (Delft University of Technology, 1998).

25. M. Frigo and S. G. Johnson, “The design and implementation of FFTW3,” Proc. IEEE 93(2), 216–231 (2005).

26. M. A. Sutton, J. J. Orteu, and H. Schreier, Image Correlation for Shape, Motion and Deformation Measurements: Basic Concepts, Theory and Applications (Springer Science & Business Media, 2009).
