Understanding the complexity of cellular biology often requires capturing and processing an enormous amount of data. In high-content drug screens, each cell is labeled with several different fluorescent markers and frequently thousands to millions of cells need to be analyzed in order to characterize biology’s intrinsic variability. In this work, we demonstrate a new microlens-based multispectral microscope designed to meet this throughput-intensive demand. We report multispectral image cubes of up to 1.30 gigapixels in the spatial domain, with up to 13 spectral samples per pixel, for a total image size of 16.8 billion spatial–spectral samples. To our knowledge, this is the largest multispectral microscopy dataset reported in the literature. Our system has highly reconfigurable spectral sampling and bandwidth settings, and we have demonstrated spectral unmixing of up to six fluorescent channels. This technology has the potential to speed up drug discovery by alleviating the imaging bottleneck in image-based assays.
© 2015 Optical Society of America
Fluorescence microscopy enables the visualization and resolution of features inside cells that would otherwise be invisible. Several distinct cellular structures—each tagged with its own spectrally unique fluorescent marker—can be labeled in a single biological assay [1,2]. In simple cases, optical filters can isolate signals from different fluorophores, provided that there is little spectral overlap between species. However, organic fluorophores often emit over a wide spectral range, making spectral overlap inevitable when three or more fluorophores are employed. These highly multiplexed assays require many spectral samples in addition to computational unmixing algorithms [3–5].
In high-content imaging assays, the use of a large number of fluorescent probes yields dramatically more information about the cellular response to the chemical compound or condition under study. However, increasing the number of detection channels, which are acquired sequentially, increases image acquisition time and reduces detection efficiency, as narrowband filters block much of the incident light.
The process of acquiring spatially and spectrally dense information about a sample is called multispectral imaging. In contrast to fluorescence microscopes that use filter sets, multispectral systems are designed to efficiently acquire spectral images with the help of a dispersive element such as a prism or a grating. Multispectral systems are of particular interest for high-content imaging because they parallelize imaging of many fluorophores, thus speeding up image-based assays.
Many multispectral imaging techniques exist, each with its own set of trade-offs. Pushbroom scanning and variants thereof are common multispectral imaging architectures that read out the spectra for one line of pixels of an image at a time [7–9]. This is a major speed trade-off, as each line needs to be read out sequentially. Compounding the problem is that light from each spatial point is dispersed over many pixels in a line, increasing the role of read noise and necessitating an even longer integration time. Whiskbroom imagers are similar to pushbroom systems, but they read out the spectrum from only one pixel at a time rather than an entire line. These systems are even slower than pushbroom imagers and are typically implemented in conjunction with confocal microscopes. Single-shot multispectral imagers are yet another class of multispectral systems. One example uses an image-splitting optic followed by a filter array or image-mapping optics to discriminate between discrete spectral channels [11,12]. The filter-based approach is inherently photon inefficient, while both systems sacrifice field of view (FOV) because the sensor is split up into several spectral channels. Another type of snapshot multispectral imager uses a coded aperture and computational techniques to recover spectral information [7,13–15]. In principle, these and other computational systems take advantage of redundancies in the spectral data cube to maintain the FOV while speeding up acquisition. Further work is needed to advance these computational techniques to the level where they can be useful in resolution-intensive applications such as fluorescence microscopy.
Despite the availability of a wide range of multispectral imaging systems, none are purpose-built for large FOV imaging and high-throughput screening. As a result, commercial automated microscopes typically image only a handful of spectral channels sequentially over a single small FOV per well on a 96-well plate. This leads to poor statistics and precludes imaging of extended samples such as neurons or cell colonies in their entirety. Imaging an entire 96-well plate with these systems is possible but extremely time intensive: it takes 3.5 h to image two color channels of the entire area of a 96-well plate at 1 μm resolution. A scaled-up screen with hundreds of plates and 10 or more spectral channels would require a prohibitive amount of time and raises concerns regarding filter combinations. Such an experiment is at present unrealistic, yet there is incredible value in multispectral high-content imaging. The number of first-order image analysis metrics such as cell area or protein expression grows linearly with the number of fluorescent channels, but more importantly, the number of second-order image metrics relating pairs of fluorescent channels (such as colocalization scores) grows quadratically, as N(N − 1)/2 for N channels. Furthermore, a continuous spectrum enables the detection of subtle spectral shifts from environmental sensors (e.g., pH-sensitive dyes) and ratiometric dyes.
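The scaling argument above can be made concrete with a small illustrative sketch (ours, not from the paper) counting first- and second-order metrics as a function of channel count:

```python
from itertools import combinations

def metric_counts(n_channels: int):
    """First-order metrics (e.g., per-channel area or intensity) grow linearly
    with channel count; second-order metrics relating channel pairs (e.g.,
    colocalization scores) grow as n*(n-1)/2."""
    first_order = n_channels
    second_order = len(list(combinations(range(n_channels), 2)))
    return first_order, second_order

# Going from 3 to 10 channels roughly triples the first-order metrics
# but multiplies the pairwise metrics fifteenfold.
print(metric_counts(3))   # (3, 3)
print(metric_counts(10))  # (10, 45)
```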
We demonstrate a novel multispectral microscope that uses a centimeter-scale microlens array to massively parallelize a whiskbroom multispectral microscope. Each microlens in the array enables parallelization and provides the effective pinhole required by a whiskbroom imager. The system simultaneously records the spectra from thousands of points in the sample, compared to a single point in a traditional whiskbroom system [16,17]. The parallelization of our system enables high throughput multispectral imaging over centimeter-scale FOVs with less than a millimeter of mechanical scanning [18–20].
With our microscope, we record a 13-channel gigapixel fluorescence image in a single acquisition, which we then demix into six independent channels. To our knowledge, this is the first time that a gigapixel multispectral microscope image has been acquired. In addition, we show that when combined with a blind unmixing algorithm, our system can spectrally separate a key cell proliferation marker, making clear the applicability of our system to cancer drug screening.
The difference between a hyperspectral confocal microscope and our system can be likened to the increase in throughput afforded by Nipkow spinning disk confocal microscopes. Spinning disk confocal systems increase imaging speed dramatically by illuminating many points in the sample at once. However, spinning disk systems cannot be modified into parallelized multispectral systems, due to their inherent geometry. The spinning pinhole disk undergoes multiple rotations during a single exposure, preventing the system from recording the spectra at each confocal point. Our solution presents a parallelized architecture for multispectral confocal microscopy by taking maximum advantage of high-bandwidth image sensors.
A. Optical System
The multispectral microlens microscope is depicted in Fig. 1(a). Multiple collinear laser beams (405/488/561/647 nm for HeLa cell experiments; 473/532/658 nm for bead experiments) are focused down to an array of focal spots by a 0.24 NA refractive microlens array. The microlens array is fabricated using the photoresist reflow technique followed by replica stamping in Norland Optical Adhesive 61 on glass [18,19]. The microlens array is close packed in the vertical direction but has a larger pitch along the horizontal axis [Fig. 1(d)]. The focal spot size (and therefore the spatial resolution) is 1.20 μm at 473 nm, 1.30 μm at 532 nm, and 1.44 μm at 658 nm. The microlens array has a focal length of 248 μm and is free from any noticeable longitudinal chromatic aberration.
A fluorescent sample is placed on the piezo stage (Newport NPXY200SG) at the microlens array focal plane and is raster scanned over the pitch of the microlens array. When the excitation focal spot excites a fluorophore in the sample, a fluorescent signal is emitted. The microlens captures a portion of that emission and relays it back to an SLR lens (50 mm, f/1.4, Nikon) via a quad-band dichroic mirror (405/488/561/647 nm for HeLa cell experiments; 405/473/532/660 nm for bead experiments, Chroma) and through a wedge prism placed directly in front of the SLR lens. The SLR lens forms a demagnified image of the microlens array on a camera sensor (Point Grey Grasshopper 3). The camera is operated at 200 frames per second, and the speed of the piezo stage is set such that the sample moves 500 nm between each camera frame. Thus, the pixel size for all images in this work is 500 nm.
The aperture setting of the SLR lens plays the role of confocal pinhole for all microlenses simultaneously [20,22,23]. To filter out reflected laser light, appropriate long-pass and notch filters are placed directly in front of the camera sensor. A wedge prism (2° or 4° nominal ray deviation, N-BK7, Edmund Optics) is placed directly in front of the SLR lens. The small dispersion of the prism glass deflects light by a wavelength-dependent angle. The resulting image at the camera sensor is spectrally dispersed, as shown in Fig. 1(b). This dispersion transforms the image of each microlens aperture from a circle into a line. The intensity profile of this line is the fluorescence emission spectrum; in addition to the total fluorescence at each point in space, our system also records the spectral distribution of light at each point in space. The brightness of each point in the microlens spectrum is a measure of the relative fluorescent intensity at a given wavelength, at that microlens’s focal spot in sample space. To acquire an image, the sample is raster scanned in 2D over the pitch of the focal spot array. The image of the sample area under a single microlens (a subimage) is constructed by rearranging the intensity trace of the microlens into two dimensions. A large FOV image including contributions from all microlenses is constructed by stitching together neighboring subimages.
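The reconstruction described above, rearranging each microlens's intensity trace into a 2D subimage and stitching subimages by grid position, can be sketched as follows. The function and array shapes are ours, a minimal model of the procedure rather than the authors' actual pipeline:

```python
import numpy as np

def reconstruct_mosaic(traces, scan_shape, lens_grid):
    """Rearrange per-microlens intensity traces into subimages and stitch
    them into one large field-of-view image.

    traces     : (n_lenses, n_frames) intensities, one row per microlens,
                 recorded while the sample raster-scans one lens pitch
    scan_shape : (rows, cols) of the raster scan; rows * cols == n_frames
    lens_grid  : (gy, gx) layout of the microlens array; gy * gx == n_lenses
    """
    n_lenses, n_frames = traces.shape
    sub = traces.reshape(n_lenses, *scan_shape)   # 1D trace -> 2D subimage
    gy, gx = lens_grid
    sub = sub.reshape(gy, gx, *scan_shape)
    # Tile neighboring subimages: (gy, gx, sy, sx) -> (gy*sy, gx*sx)
    return sub.transpose(0, 2, 1, 3).reshape(gy * scan_shape[0], gx * scan_shape[1])

# Toy example: 4 microlenses in a 2x2 grid, each scanning a 3x3 patch
traces = np.arange(4 * 9).reshape(4, 9)
img = reconstruct_mosaic(traces, (3, 3), (2, 2))
print(img.shape)  # (6, 6)
```

In the real instrument each trace sample also carries a dispersed spectrum, so the same rearrangement is applied per spectral bin to build the full data cube.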
The geometry of the microlens array along with the angle of the wedge prism is tailored to the sample structure and the spectral sampling required. The distance between neighboring microlens columns needs to be large enough to accommodate the bandwidth of that sample’s fluorescence spectrum. We use a microlens column spacing of 600 μm, corresponding to approximately 13.25 pixels at the image sensor. The wedge prism angle is chosen so that the expected fluorescent emission bandwidth of the sample does not exceed the blank space between microlens columns. The lower end of the spectrum in the bead images (Sections 2.D and 2.E) is set by the 500 nm long-pass emission filter, whereas for the HeLa cell images (Sections 2.C and 2.E) it is set by a 405 nm notch filter. The upper limit of 700 nm is set by a combination of the emission spectrum of the most red-emitting fluorophores in our sample and the response of our visible wavelength camera. Ideally, the emission spectra will be dispersed over a distance on the sensor not exceeding the blank space between microlens columns. For a 125 μm diameter microlens with a 600 μm column pitch, and given the system demagnification, this maximum spectrum size is 58 μm at the image sensor or 475 μm at the microlens plane. For images of fluorescent beads (Sections 2.D and 2.E), we use a 4° N-BK7 wedge prism (7.68° physical wedge angle), which produces an angular spread of 0.065° between 500 and 700 nm light [Fig. 1(c)]. This spread covers an equivalent distance of 450 μm at the microlens image plane (roughly 400 mm from the SLR lens), just under the maximum allowed before there is bleed-through between neighboring microlens columns. A larger spectral range of 430–730 nm is obtained with a 2° N-BK7 wedge prism (3.85° physical wedge angle), which is used for the imaging of HeLa cells in Sections 2.C and 2.E. This smaller angle prism also has the advantage of increased signal at the expense of coarser sampling.
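As a quick sanity check on the geometry quoted above, propagating the 0.065° angular spread over the roughly 400 mm to the microlens image plane reproduces the ~450 μm figure:

```python
import math

# Figures quoted in the text for the 4-degree N-BK7 prism
angular_spread_deg = 0.065   # dispersion between 500 and 700 nm
distance_mm = 400.0          # approximate distance from SLR lens to microlens image plane

spread_um = distance_mm * math.tan(math.radians(angular_spread_deg)) * 1e3
print(f"{spread_um:.0f} um")  # ~454 um, consistent with the quoted ~450 um
```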
We note that our approach has an inherent flexibility not present in a typical pushbroom or whiskbroom system. By varying the microlens array column pitch and prism angle, one can control the trade-off between spectral sampling density and spatial throughput. In traditional systems, one cannot reassign a detector element from spatial to spectral or vice versa without completely redesigning the system. Such flexibility could be useful if one desired to increase spatial throughput at the expense of spectral information. For example, a nearly close-packed microlens array and a small angle wedge prism could be employed in cases where the user wants to measure a peak shift of a single type of fluorophore. Movement of the apparent microlens position along the dispersion direction would be sufficient to quantify the spectral shift. This could be useful for imaging modalities where peak shifts are important, such as fluorescence resonance energy transfer (FRET) and ratiometric fluorescent techniques (calcium, oxygen, and pH indicators).
B. Spectral Resolution
Before measuring the spectral resolution of our system, we need to know the conversion factor from camera pixels to nanometers. For simplicity we assume that the pixel-to-nanometer mapping is linear throughout the visible range, though small deviations due to nonlinear dispersion of the prism will occur. This linear assumption leads to a maximum error of 16 nm (29 nm), with a 4° (2°) N-BK7 prism from 500 to 700 nm (430 to 730 nm).
For a 4° wedge prism, we start characterization by recording the emission spectra of two types of fluorescent beads (Dragon Green and Sky Blue, Life Technologies and Spherotech, respectively) with a commercial USB spectrometer [Ocean Optics; result shown in Fig. 2(a)]. These beads are then imaged with our multispectral microlens microscope [Fig. 2(b)]. The distance between the emission peaks, as measured by the spectrometer, is 166 nm (679 nm for Sky Blue, 513 nm for Dragon Green), whereas the equivalent peak-to-peak distance on the camera of the microlens microscope is 8 pixels. Therefore, the spectral sampling density with a 4° wedge prism is 21 nm/pixel.
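The calibration just described amounts to a two-point linear fit; a minimal sketch (the function names are ours, not the paper's):

```python
def nm_per_pixel(peak1_nm, peak2_nm, pixel_separation):
    """Linear pixel-to-wavelength scale from two known emission peaks."""
    return abs(peak2_nm - peak1_nm) / pixel_separation

def pixel_to_nm(px, ref_px, ref_nm, scale):
    """Map a camera pixel index to wavelength under the linear-dispersion assumption."""
    return ref_nm + (px - ref_px) * scale

# Sky Blue (679 nm) and Dragon Green (513 nm) peaks, measured 8 pixels apart
scale = nm_per_pixel(679, 513, 8)
print(round(scale, 2))  # 20.75, i.e. ~21 nm/pixel with the 4-degree prism
```

The same two-point procedure underlies the 2° prism calibration described later (DAPI at 460 nm and the whole cell stain at 673 nm).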
We measure the spectral resolution experimentally by illuminating the microlens array with a ring of white LEDs [dashed box, Fig. 1(a)], without a sample present. The microlens array is organized in a series of columns, with a pitch of 125 μm in the vertical direction and 600 μm in the horizontal direction. A 4° wedge prism is placed in front of the SLR lens, and either a 10 nm bandpass filter centered at 473 nm or a 15 nm bandpass filter centered at 658 nm is placed in front of the camera.
In the above configuration, the image formed on the camera is a series of columns—each a column of microlenses—as shown in Fig. 2(c). Because only a narrow spectral band of light is imaged onto the sensor, the resulting dispersed image of each microlens appears as a narrow spectral signal. The 10–15 nm bandwidth of the filter is narrower than the sampling density, so the resulting measured spectrum approximates the spectral impulse function. We quantify the width of the spectral impulse by fitting the data to a Gaussian and reporting the resulting full width at half-maximum (FWHM). At an SLR aperture setting of f/8, the spectral impulse FWHM is 32.7 nm (1.56 pixels) at 473 nm, and 42.0 nm (2.00 pixels) at 658 nm. This means that at f/8, the effective spectral resolution is Nyquist limited to 42.0 nm (2 pixels) over the entire spectral bandwidth due to the discrete sampling of the camera. As expected, the spectral resolution in the red is slightly worse than in the blue due to the image side diffraction limit and longitudinal chromatic aberration of the SLR lens.
The aperture setting controls the blurring of the point spread function (PSF) on the camera itself, thereby affecting the spectral resolution. Repeating the experiment above with an aperture setting of f/16 broadens the spectral impulse FWHM to 41.6 nm (1.98 pixels) at 473 nm, and 52.9 nm (2.52 pixels) at 658 nm. For the rest of the results in this work, we use an aperture setting of f/8 for a good compromise between spectral resolution, signal level, and confocal filtering.
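The width measurement above can be sketched as follows. The paper fits a Gaussian and reports its FWHM; here we use a simpler half-maximum crossing interpolation as a dependency-free stand-in, applied to a synthetic impulse with σ = 0.85 px, whose FWHM (2.355 σ ≈ 2.0 px) is comparable to the measured 2.00 px at 658 nm and f/8:

```python
import math

def fwhm_pixels(counts):
    """Full width at half-maximum (in pixels) of a sampled spectral impulse,
    found by linearly interpolating the half-maximum crossings."""
    y = [v - min(counts) for v in counts]
    half = max(y) / 2.0
    above = [i for i, v in enumerate(y) if v >= half]
    i0, i1 = above[0], above[-1]
    left = i0 - (y[i0] - half) / (y[i0] - y[i0 - 1])   # left crossing
    right = i1 + (y[i1] - half) / (y[i1] - y[i1 + 1])  # right crossing
    return right - left

# Synthetic Gaussian impulse: sigma = 0.85 px -> FWHM = 2.355 * sigma ~ 2.0 px
impulse = [100.0 * math.exp(-(x - 6.0) ** 2 / (2 * 0.85 ** 2)) + 5.0
           for x in range(13)]
print(round(fwhm_pixels(impulse), 2))  # ~2.0 px
```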
The wavelength axis for experiments using a 2° wedge prism is obtained in a similar fashion, using the known emission maxima of the nuclear stain DAPI (460 nm) and of the whole cell stain (673 nm), and assuming uniform spectral sampling in between (30.4 nm/pixel).
It is important to note that the ability to distinguish between two fluorophores is not described by the spectral resolution itself, but rather by the efficacy of the spectral unmixing procedure and the similarity of the fluorophore spectra.
C. Whole-Well Imaging
We envision the multispectral microlens microscope having applicability in drug discovery applications, where large FOVs and large imaging throughput are essential. To this end, we demonstrate whole-well multispectral imaging of a HeLa cell culture. An entire 6.5 mm diameter single microwell of HeLa cells from a 96-well microwell plate is imaged in a single acquisition and is shown in Fig. 3(a). (A full-size version of Fig. 3(a) is available online.) The well is excited with four lasers simultaneously: 405, 488, 561, and 647 nm (Coherent OBIS), and a 2° N-BK7 wedge prism is used for dispersion. The cells are labeled with DAPI, AlexaFluor 488 (to label Ki67, a cell proliferation protein, by immunofluorescence), AlexaFluor 555 phalloidin (labels actin), and a whole cell stain (Thermo Fisher Scientific, labels the entire cell body). Figures 3(a) and 3(b) are constructed by assigning data from the 521, 612, and 673 nm spectral bins to red, green, and blue image channels, respectively. Green fibrous actin filaments are visible in the cytoplasm of the cells in the high-magnification FOV of Fig. 3(b). Though the image is displayed in a three-channel format (red/green/blue), there is an 11-sample fluorescence spectrum available at every pixel of the image of Fig. 3(a), amounting to over a billion spatial–spectral samples in this single-well dataset.
A complete spectral stack of a single microlens FOV [Fig. 3(b)] is shown in Visualization 1, and Fig. 3(c) shows the full recorded fluorescence spectrum—ranging from 430 to 730 nm—at three points in the FOV. Point I, located in the nucleus of a cell, contains signals from DAPI, AlexaFluor 488, and the whole cell stain. Point II is located in the cytoplasm, where AlexaFluor 555 phalloidin is the dominant emitter. The whole cell stain is also present in this region, but its contribution is much smaller than that of AlexaFluor 555 phalloidin and cannot be resolved without demixing methods. Point III is located near the middle of the cell but outside its nucleus. The whole cell stain signal is concentrated in this region, as it is the thickest region of the cell. On the other hand, this pixel contains no contributions from either DAPI or AlexaFluor 488, but may in general contain signal from AlexaFluor 555 labeling actin in the middle of the cell.
Of interest for high-content screening applications is the variation of signal at the 521 nm wavelength bin [red in Fig. 3(b)]. This signal is mainly due to the presence of AlexaFluor 488 labeled Ki67 protein in the cell nuclei. Ki67 is a standard cell proliferation marker often used in cancer cell experiments. Even without applying spectral unmixing techniques, one can see pink granules in some cells and not in others. The presence of Ki67 indicates that the cell is actively dividing, whereas a lack of Ki67 is a sign that a cell has stopped dividing. This variation will be more apparent after applying the demixing methods in Section 2.E.
D. Gigapixel Imaging
The full power of our multispectral approach is realized when there are many types of fluorophores in one sample. We prepared a sample with six types of fluorescent beads dried onto a well plate bottom. The sample was imaged with a 4° wedge prism and simultaneous 473, 532, and 658 nm illumination. The entire dataset is composed of 9 wells of a 96-well plate, for a total image size of 1.30 gigapixels with 13 spectral samples and six independent fluorescent channels. The raw data contain 16.8 billion spatial–spectral samples. To the best of our knowledge, this is the largest multispectral fluorescence microscopy dataset reported in the literature.
Each well is roughly 6 mm in diameter, resulting in single-well images of roughly 12,000 × 12,000 pixels. The image acquisition time is 27.5 min, for an average throughput of 10.2 million spatial–spectral samples per second. To put this into context, a typical high-content imaging system using a conventional objective will image two channels of the entirety of each well of a 96-well plate in 210 min, or 105 min/channel (personal observation; data not shown). Our system scales to 23 min/channel, nearly a fivefold improvement in speed. The comparison is incomplete, however, because commercial systems are not designed to acquire more than five channels, less than half of what we demonstrate.
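The throughput figures above can be re-derived from the quoted numbers; this sketch assumes, as the 10.2 M samples/s figure implies, that the 27.5 min acquisition covers all nine wells:

```python
# Throughput arithmetic from the figures quoted in the text
samples = 16.8e9          # spatial-spectral samples in the 9-well dataset
acq_time_s = 27.5 * 60    # 27.5 min acquisition

throughput = samples / acq_time_s
print(f"{throughput / 1e6:.1f} M samples/s")  # ~10.2

# Extrapolate 9 wells -> full 96-well plate, split over 13 spectral channels
plate_min = 27.5 * 96 / 9
print(f"{plate_min / 13:.0f} min/channel")    # ~23, vs ~105 min/channel commercial
```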
In order to visualize the dataset, each multispectral well image is downsampled from 13 to 3 spectral samples and saved as a three-channel RGB image. A scaled-down version of the resulting color gigapixel image is shown in Fig. 4(a). Each well is populated with a different combination of beads. The wells in the leftmost column contain five types of beads and therefore appear greenish, with noticeable blue and red areas. On the other hand, the bottom two wells in the middle column contain only beads that emit in the middle of the recorded spectrum, and therefore these wells appear completely green.
Magnified views of regions within two of the wells [locations b and c in Fig. 4(a)] are shown in Figs. 4(b) and 4(c). The FOV in region b contains five distinct fluorescent beads, whereas that of region c contains six distinct beads. The bead color is the result of the downsampled spectrum being displayed as a coordinate in RGB color space. Isolated examples of each type of bead in Fig. 4(c) are shown in Fig. 4(d) [at twice the magnification of Fig. 4(c)]. The beads are referred to by their product name (Dragon Green, Envy Green, Suncoast Yellow, and Flash Red, Life Technologies; Pink and Sky Blue, Spherotech). The reader can identify each type of bead in Figs. 4(b) and 4(c) by matching sizes and colors to the examples in Fig. 4(d).
The visualization of the dataset in Fig. 4 is similar to what would be recorded by an RGB camera (though an RGB camera has lower light throughput due to the absorbing color filters). However, the underlying data are multispectral, which allows further separation of the image data into their constituent fluorescent channels.
E. Spectral Unmixing
Spectral signals from a multiplicity of fluorophores can be separated using linear unmixing methods in postprocessing. The response of the system to each type of fluorophore can be extracted either via blind spectrum estimation or by manually identifying the spectra in the data. Once the response for each fluorophore is known, singular value decomposition is employed to estimate the contribution of each fluorophore at every pixel in the image.
We apply a user-guided unmixing process to split raw multispectral data into their six constituent fluorescent channels. A MATLAB script prompts the user to identify pixels of an image that contain distinct fluorophores. The script then plots each user-identified spectrum for review. If the spectra agree with what the user expects from the sample, the computer starts a linear unmixing process using singular value decomposition with these spectra as input. If the spectra do not meet the user’s expectation (for example, the user identified the wrong pixel), then the user has an opportunity to identify the spectra from the image once again. Figure 5(a) shows the ground truth spectra for all six beads used in this experiment, as recorded by a commercial spectrometer. Figure 5(b) is the result of the user-guided spectra identification process, which agrees well with the ground truth curves in Fig. 5(a). The spectra are color coded by bead type, which are listed in Fig. 5(c) along with the demixing results.
The user-identified spectra in Fig. 5(b) are used as input for demixing part of the top-right well in Fig. 4(a). This well contains all six types of beads. We use the singular value decomposition (svd) function in MATLAB to demix the 13-sample multispectral dataset into six independent fluorescent channels, one for each type of bead. The result is shown in Fig. 5(c) as a spectral stack and then a composite false color image in RGB. An animation of the unmixed channels in a single microlens FOV is shown in Fig. 5(c) (Visualization 2). By visual inspection, the separation quality is good for all beads except the Pink beads, which are both dim and spectrally very similar to the Envy Green beads. On rare occasions, the signal from a point in the sample will saturate a camera pixel. When this occurs, the apparent signal strength at the peak wavelength is lower than it should be because it is clipped at the maximum pixel value. As a result, cross talk from other spectral channels may overcontribute at these pixels after unmixing, causing minor artifacts where the center of a bead can appear a different color from its periphery. These artifacts can be eliminated with a higher-dynamic-range camera or by careful laser power balancing.
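The unmixing step itself reduces to a per-pixel linear least-squares solve; below is a NumPy analogue of the MATLAB svd-based demixing, with toy spectra and abundances (not the paper's data):

```python
import numpy as np

def unmix(cube, spectra):
    """Linear spectral unmixing by least squares via the SVD-based pseudoinverse.

    cube    : (H, W, S) multispectral image, S spectral samples per pixel
    spectra : (S, F) reference spectrum of each of F fluorophores
    returns : (H, W, F) estimated abundance of each fluorophore per pixel
    """
    H, W, S = cube.shape
    pinv = np.linalg.pinv(spectra)              # (F, S), computed via SVD
    abundances = cube.reshape(-1, S) @ pinv.T   # solve spectra @ a ~= pixel spectrum
    return abundances.reshape(H, W, -1)

# Toy check: two fluorophores with overlapping 4-sample spectra
spectra = np.array([[1.0, 0.2],
                    [0.8, 0.5],
                    [0.3, 1.0],
                    [0.1, 0.7]])
truth = np.array([[[2.0, 1.0]]])   # one pixel, known abundances
cube = truth @ spectra.T           # forward model: mix the reference spectra
est = unmix(cube, spectra)
print(np.round(est, 6))            # recovers [[[2., 1.]]]
```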
We quantify the separation accuracy by constructing an unmixing separation matrix, shown in Fig. 5(d). This matrix is assembled by first identifying five beads of each type to establish a ground truth. The total signal within each bead type is then summed for each channel (Ch 1–6) and displayed as a column in the matrix. For example, the Dragon Green column shows how much signal from the five Dragon Green beads was unmixed into Channels 1–6. The percentages on the bottom show the percentage of the signal of each bead type that was unmixed into the correct channel. For example, 86% of the Dragon Green signal was unmixed into Channel 1, whereas 77% of the Pink bead signal was correctly unmixed into Channel 2.
The unmixing separation matrix can also be read row-wise. Each row describes the amount of signal in a channel that originated from a given bead type (i.e., the Ch 1 row shows the amount of signal in Channel 1 that originated from each bead type). The percentages on the right show the percentage of each channel that originated from the correct bead type. For example, 98% of the signal from Channel 1 came from Dragon Green beads. Perfect unmixing would yield a diagonal unmixing separation matrix, with 100% of the signal from each fluorophore being unmixed into the correct channel (and vice versa).
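The column- and row-wise readings of the separation matrix can be computed as below; the 2 × 2 matrix here is illustrative only, not the measured data:

```python
import numpy as np

def separation_percentages(M):
    """Column and row accuracy of an unmixing separation matrix M, where
    M[i, j] = signal from bead type j that ended up in channel i, and the
    correct channel for bead type j is channel j."""
    M = np.asarray(M, dtype=float)
    col_pct = 100.0 * np.diag(M) / M.sum(axis=0)  # % of each bead unmixed correctly
    row_pct = 100.0 * np.diag(M) / M.sum(axis=1)  # % of each channel from correct bead
    return col_pct, row_pct

# Hypothetical 2x2 example (values are ours, for illustration)
M = [[86.0, 12.0],
     [14.0, 88.0]]
col, row = separation_percentages(M)
print(col.round(0), row.round(0))  # [86. 88.] [88. 86.]
```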
Unsurprisingly, the spectrally similar Pink, Envy Green, and Suncoast Yellow beads cause most of the spectral cross talk in the system, with 77%, 87%, and 79% unmixing efficiencies, respectively. However, it is impressive that Pink and Envy Green beads are unmixed at all, given that their spectra measured by the microscope are nearly identical [Fig. 5(b)] and that their spectral peaks are just 10 nm apart [Fig. 5(a)]. On the red side of the spectrum, Sky Blue and Flash Red beads have a slightly larger peak separation of 21 nm. These beads are unmixed with no visible artifacts in either Channel 5 or Channel 6.
In more complicated biological samples, there may not be any points in the sample at which there is only a single species of fluorophore, making it impossible to obtain reference spectra with the user-assisted scheme above. In these cases, the fluorophore spectra can be estimated using a blind demixing algorithm. We use the MATLAB nonnegative matrix factorization function (nnmf) to estimate the fluorophore spectra from our whole-well HeLa cell sample and proceed to demix the spectral dataset into four independent channels. The algorithm requires as input the multispectral image data and a target matrix rank that is equal to the number of fluorophore species present in the sample. We found that running the algorithm on small successive chunks of the spectrum can greatly improve the accuracy of spectral estimation. First, spectral data from 430 to 612 nm are used as input to the nonnegative matrix factorization algorithm, with a target matrix rank of 3, to estimate the spectra of DAPI, AlexaFluor 488, and AlexaFluor 555. The spectrum of the whole cell stain, which does not emit in this spectral range, is estimated by applying the nonnegative matrix factorization algorithm to the 612–734 nm spectral range, with a target matrix rank of 2. An estimate for the spectrum of AlexaFluor 555 at 612 nm is obtained from both runs of the unmixing algorithm. A continuous spectral estimate for AlexaFluor 555 is constructed by concatenating the two estimates and setting their values equal at 612 nm. Figure 6(a) shows the full set of four blind spectral estimates, which agree well with the known emission maxima of each dye, up to the expected accuracy of our technique. Once in hand, the fluorophore spectra estimates can be used to unmix spectral data just as in the user-assisted case. Figures 6(b1)–6(b4) show a small portion of an FOV in each of the four demixed channels. The full FOV is shown in Fig. 6(c).
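The blind estimation step can be sketched with a minimal NumPy implementation of nonnegative matrix factorization (Lee–Seung multiplicative updates), standing in for MATLAB's nnmf; the data and rank here are toy values, not the HeLa measurements:

```python
import numpy as np

def nmf(V, rank, iters=2000, seed=0):
    """Nonnegative matrix factorization V ~= W @ H by Lee-Seung multiplicative
    updates (a NumPy stand-in for MATLAB's nnmf). For spectral data with one
    pixel per row, the rows of H are the estimated fluorophore spectra and
    the columns of W their per-pixel abundances."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Toy data: two overlapping "spectra" mixed with random nonnegative abundances
rng = np.random.default_rng(1)
true_H = np.array([[1.0, 0.8, 0.3, 0.1],    # blue-weighted emitter
                   [0.1, 0.5, 1.0, 0.7]])   # red-weighted emitter
V = rng.random((200, 2)) @ true_H           # 200 "pixels", 4 spectral samples
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.4f}")
```

As in the paper, the target rank must be chosen to match the number of fluorophore species expected in the data.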
Note the red granules coming from AlexaFluor 488-labeled Ki67 protein in the cell nuclei. Actively dividing cells will express the protein (red dots and red background appear in the nucleus), while cells that have stopped dividing will not contain the protein (no red dots or background in the nucleus). This difference is visible in various nuclei in Fig. 6(c); two nuclei are highlighted in Figs. 6(b1) and 6(b2) to emphasize this difference: cell I is expressing Ki67, while it is almost completely absent from cell II. When automated over an entire large dataset, this analysis can be used to help identify potential cancer drugs.
We have introduced a gigapixel-scale multispectral fluorescence microscopy system, suitable for high-throughput applications such as high-content screening. To our knowledge, this is the first microscopy system to record a gigapixel multispectral image. The system uses a microlens array to parallelize point scanning fluorescence microscopy over a large area. A single physical aperture provides the confocal filtering necessary for multispectral imaging. A simple wedge prism is used to disperse fluorescence emission onto a camera, yielding a spectral curve at each point in the sample. User-assisted spectrum identification was employed to unmix up to six fluorescent channels with as little as 10 nm spectral separation. Blind demixing was also employed to unmix a multispectral HeLa cell image into a four-channel image showing variation of a typical cell proliferation marker, Ki67. Such an analysis is the basis of a cell proliferation assay, variants of which are a key component of cancer drug screening—promising drug candidates halt uncontrolled cell proliferation. The increased throughput afforded by our system could help to identify potential drugs more quickly and efficiently.
Unlike hyperspectral confocal microscopes, spatial throughput/spectral sampling trade-offs are easily adjusted with our system by swapping different wedge prisms and microlens arrays. Throughputs upwards of 20 million samples per second are possible using substrates with a larger packing density (less than half the area of a well plate is covered in sample), or by use of spectrum-splitting optics to make use of this blank space on the camera.
One future avenue of development is small-angle wedge prism systems that record very coarse fluorescence spectra but still allow unmixing of small numbers of fluorophores at higher spatial throughput. In addition to providing a multispectral imaging platform for high-content screening, our system may also find application in monitoring small spectral shifts of a single fluorophore species in time and/or space, for example, in a FRET assay.
The authors would like to thank Kenneth Crozier for insightful discussions.
1. J. Pawley, Handbook of Biological Confocal Microscopy (Springer, 2006).
2. F. Zanella, J. B. Lorens, and W. Link, “High content screening: seeing is believing,” Trends Biotechnol. 28, 237–245 (2010). [CrossRef]
3. M. Alterman, Y. Y. Schechner, and A. Weiss, “Multiplexed fluorescence unmixing,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2010).
4. T. Zimmermann, “Spectral imaging and linear unmixing in light microscopy,” in Microscopy Techniques, Vol. 95 of Advances in Biochemical Engineering (Springer, 2005), pp. 245–265.
5. B. Kraus, M. Ziegler, and H. Wolff, “Linear fluorescence unmixing in cell biological research,” Mod. Res. Educ. Top. Microsc. 2, 863–873 (2007).
6. M. Bickle, “The beautiful cell: high-content screening in drug discovery,” Anal. Bioanal. Chem. 398, 219–226 (2010). [CrossRef]
7. M. E. Gehm, M. S. Kim, C. Fernandez, and D. J. Brady, “High-throughput, multiplexed pushbroom hyperspectral microscopy,” Opt. Express 16, 11032–11043 (2008). [CrossRef]
8. Q. Li, X. He, Y. Wang, H. Liu, D. Xu, and F. Guo, “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt. 18, 100901 (2013). [CrossRef]
9. G. Di Caprio, D. Schaak, and E. Schonbrun, “Hyperspectral fluorescence microfluidic (HFM) microscopy,” Biomed. Opt. Express 4, 1486–1493 (2013). [CrossRef]
10. Nikon C2+ Specifications, 2015, http://www.nikoninstruments.com/Products/Microscope-Systems/Confocal-Microscopes/C2-Confocal2/(specifications).
11. B. Geelen, N. Tack, and A. Lambrechts, “A snapshot multispectral imager with integrated tiled filters and optical duplication,” Proc. SPIE 8613, 861314 (2013).
12. L. Gao, R. T. Kester, N. Hagen, and T. S. Tkaczyk, “Snapshot image mapping spectrometer (IMS) with high sampling density for hyperspectral microscopy,” Opt. Express 18, 14330–14344 (2010). [CrossRef]
13. A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47, B44–B51 (2008). [CrossRef]
14. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15, 14013–14027 (2007). [CrossRef]
15. A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, “Video rate spectral imaging using a coded aperture snapshot spectral imager,” Opt. Express 17, 6368–6388 (2009). [CrossRef]
16. P. M. Lundquist, C. F. Zhong, P. Zhao, A. B. Tomaney, P. S. Peluso, J. Dixon, B. Bettman, Y. Lacroix, D. P. Kwo, E. McCullough, M. Maxham, K. Hester, P. McNitt, D. M. Grey, C. Henriquez, M. Foquet, S. W. Turner, and D. Zaccarin, “Parallel confocal detection of single molecules in real time,” Opt. Lett. 33, 1026–1028 (2008). [CrossRef]
17. J. Eid, A. Fehr, J. Gray, K. Luong, J. Lyle, G. Otto, P. Peluso, D. Rank, P. Baybayan, B. Bettman, A. Bibillo, K. Bjornson, B. Chaudhuri, F. Christians, R. Cicero, S. Clark, R. Dalal, A. deWinter, J. Dixon, M. Foquet, A. Gaertner, P. Hardenbol, C. Heiner, K. Hester, D. Holden, G. Kearns, X. Kong, R. Kuse, Y. Lacroix, S. Lin, P. Lundquist, C. Ma, P. Marks, M. Maxham, D. Murphy, I. Park, T. Pham, M. Phillips, J. Roy, R. Sebra, G. Shen, J. Sorenson, A. Tomaney, K. Travers, M. Trulson, J. Vieceli, J. Wegener, D. Wu, A. Yang, D. Zaccarin, P. Zhao, F. Zhong, J. Korlach, and S. Turner, “Real-time DNA sequencing from single polymerase molecules,” Science 323, 133–138 (2009). [CrossRef]
18. A. Orth and K. Crozier, “Microscopy with microlens arrays: high throughput, high resolution and light-field imaging,” Opt. Express 20, 13522–13531 (2012). [CrossRef]
19. A. Orth and K. Crozier, “Gigapixel fluorescence microscopy with a water immersion microlens array,” Opt. Express 21, 2361–2368 (2013). [CrossRef]
20. A. Orth and K. B. Crozier, “High throughput multichannel fluorescence microscopy with microlens arrays,” Opt. Express 22, 18101–18112 (2014). [CrossRef]
21. T. Tanaami, S. Otsuki, N. Tomosada, Y. Kosugi, M. Shimizu, and H. Ishida, “High-speed 1-frame/ms scanning confocal microscope with a microlens and Nipkow disks,” Appl. Opt. 41, 4704–4708 (2002). [CrossRef]
22. H. J. Tiziani and H.-M. Uhde, “Three-dimensional analysis by a microlens-array confocal arrangement,” Appl. Opt. 33, 567–572 (1994). [CrossRef]
23. E. Schonbrun, W. N. Ye, and K. B. Crozier, “Scanning microscopy using a short-focal-length Fresnel zone plate,” Opt. Lett. 34, 2228–2230 (2009). [CrossRef]
24. Y. Urano, D. Asanuma, Y. Hama, Y. Koyama, T. Barrett, M. Kamiya, T. Nagano, T. Watanabe, A. Hasegawa, P. L. Choyke, and H. Kobayashi, “Selective molecular imaging of viable cancer cells with pH-activatable fluorescence probes,” Nat. Med. 15, 104–109 (2009). [CrossRef]
25. A. Orth, “Whole well HeLa cell hyperspectral image,” GigaPan, 2015 [retrieved 3 April 2015], http://goo.gl/aKLSmj.
26. T. Scholzen and J. Gerdes, “The Ki-67 protein: from the known and the unknown,” J. Cell. Physiol. 182, 311–322 (2000). [CrossRef]
27. A. Munoz-Barrutia, J. Garcia-Munoz, B. Ucar, I. Fernandez-Garcia, and C. Ortiz-de-Solorzano, “Blind spectral unmixing of M-FISH images by non-negative matrix factorization,” in 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2007) (IEEE, 2007), pp. 6247–6250.