We present a multichannel fluorescence microscopy technique for high throughput imaging applications. A microlens array with over 140,000 elements is used to image centimeter-scale samples at up to 18.1 megapixels per second. Large field-of-view multichannel fluorescent imaging is demonstrated in both sequential and parallel geometries. The extended dynamic range of this approach is also discussed.
© 2014 Optical Society of America
Fluorescence microscopy is widely used in biological research to visualize morphology from the whole organism down to the cellular level. In the field of high content screening (HCS), fluorescence microscopy followed by image analysis is used to quantify the reactions of cells to candidate drugs at varying dosages [2,3]. Typically, this involves imaging microwell plates. At the time of writing, automated microscopes require at least ~1-2 seconds per imaging position and are outfitted with cameras of up to 4.66 megapixels. Such microscopes therefore achieve throughputs of ~4.66 megapixels per second (Mpx/s) at best. For 7.3mm square wells imaged at 0.5μm/pixel, this corresponds to ~73 minutes per 96-well plate. This imaging throughput represents a bottleneck to drug discovery efforts, and methods that improve upon it would be greatly advantageous.
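The quoted plate time follows directly from these figures; as a quick sanity check, a sketch using only the numbers above:

```python
# Back-of-envelope scan time for a conventional HCS microscope,
# using the figures quoted above (camera-limited throughput assumed).
well_size_mm = 7.3        # side of one square well
pixel_um = 0.5            # sampling at the sample plane
throughput_mpx_s = 4.66   # best-case pixel throughput
wells = 96

px_per_well = (well_size_mm * 1000 / pixel_um) ** 2  # 14,600 x 14,600 pixels
scan_min = wells * px_per_well / (throughput_mpx_s * 1e6) / 60
print(round(scan_min))  # 73 (minutes per 96-well plate)
```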
A conventional automated wide-field microscope builds up an image by stitching together smaller fields-of-view (FOVs), each imaged by a microscope objective. After each small FOV is acquired, the sample must be moved by a distance equal to the linear dimension of the FOV before the next image can be taken. Additionally, the position of the microscope objective must be adjusted so that the subsequent image is in focus. Consequently, most of the image acquisition time is spent on mechanical movement rather than on photon collection.
Recently, arrayed and structured illumination imaging platforms that aim to reduce imaging time have been introduced [6–12]. The general theme is to illuminate the sample with an array of focal spots that act as a group of scanning microscopes. Photons are collected with a camera at high speed, increasing the information throughput. Crucially, all of these methods allow for photon collection while the sample is moving – which is impossible with a regular wide-field microscope due to motion blur. Continuously imaging during sample movement relaxes the demands for fast stage accelerations over large distances.
These focal spot scanning techniques have been demonstrated in a brightfield configuration by employing a holographic mask to shape an illuminating laser beam into an array of focal spots [6,10]. These demonstrations create images where light absorption is the contrast mechanism. Similar systems have also been used to image fluorescence where the focal spot array is created by the Talbot effect generated by a microlens array illuminated by a laser beam [7,8]. The work in this paper is the latest and most advanced implementation of our previously reported large FOV microscope that uses a microlens array to directly focus a laser beam into a grid of focal spots [11,12]. Most significantly, this is the first multi-wavelength demonstration of our high throughput microscopy technique.
We demonstrate acquisition of high-resolution fluorescence images free of the ghosting and streaking artifacts that are often seen in other multi-spot microscopy systems (Fig. 3 in , Fig. 5 (b1) in  and Fig. 6 in ). Ghosting and low contrast can be particularly problematic in systems where the focal spot grid is created by diffractive effects, because any residual laser excitation outside of the focal spots contributes to background, degrading the signal-to-noise ratio and resolution and decreasing overall system efficiency. Our system's focusing mechanism is entirely refractive, which results in high efficiency and a high signal-to-background ratio.
2. System setup and characterization
Our high throughput imaging approach eliminates mechanical dead time by continuously imaging the sample as it is being scanned in two dimensions [11,12]. The optical layout of our approach is shown in Fig. 1. A multi-wavelength excitation laser (Laserglow, 473/532/658nm) emits a beam with an output power of up to 150mW per channel. The beam is expanded by a 50x microscope objective and reflected into the optical path by a quad-band dichroic mirror (Chroma zt405/473/532/660rpc). A 125μm pitch hexagonal grid microlens array (MLA) splits the laser beam into an array of focal spots. Because the laser beam is not collimated when it hits the microlens array, the microlenses at the periphery will create focal spots at a field angle of up to 2.5°. This is indicated by the slight tilt of the focal spots created by the microlens array in Fig. 1.
Each microlens in the array functions as an independent point scanning microscope by collecting the fluorescence emitted at its focal spot and relaying it back to a CMOS camera (Basler acA2000-340km) via a 50mm focal length, f/1.4 single lens reflex (SLR) camera lens with an aperture set to f/8. The SLR lens is placed 400mm from the microlens array, equal to the separation between the focal plane of the expansion microscope objective and the microlens array.
The fluorescence collected by each microlens exits its aperture at the same angle as the laser illumination at that given position in the microlens array. Consequently, the fluorescence converges to the center of the SLR aperture. This geometry is critical for avoiding vignetting from microlenses towards the periphery of the array. For example, if the illuminating laser beam were collimated, some of the signal from non-central microlenses would miss the physical aperture of the SLR lens. This is a similar concept to the de-scanning mirrors in a point scanning confocal microscope. Without de-scanning the fluorescence collected by the microscope objective, the signal in a confocal microscope would miss the confocal pinhole for non-zero field angles (i.e. away from the center of the FOV).
A fluorescence filter between the SLR lens and the camera filters the wavelength range appropriate for the fluorophore being employed. The SLR placement (400mm from the MLA) results in a ~7x de-magnified image of the microlens array at the camera sensor plane. The fluorescence distribution on the camera sensor is an array of bright spots (Fig. 1, (i) and (ii)), with each corresponding to an image of a microlens. The brightness of each microlens imaged in this way is proportional to the fluorescence excited from the sample at its focal spot. The set of camera sensor pixels corresponding to the image of each microlens can therefore be thought of as playing the same role as the point detector of a conventional scanning confocal microscope.
The camera acquires a video at 200 frames per second (fps) with the camera gain set to its minimal value. The sample is raster scanned under the focal spot array as the movie is recorded. The sample sits on a closed loop piezoelectric stage (Newport NPXY200SG) which is driven by a sawtooth wave to yield a speed of 100μm/s along the x-direction, resulting in a sampling density of 0.5μm per camera frame. A slow linear ramp of 0.37μm/s is applied along the y-direction. The image of the portion of the sample gathered by each microlens (a sub-FOV) is assembled by summing pixel values of the microlens image in the raw video and reorganizing them appropriately into a two dimensional image. This image has dimensions 135μm x 118μm and a pixel size of 0.5μm. Fluctuations in laser intensity are mitigated by dividing by the normalized average of all microlens intensities from each camera frame. A large FOV image is created by stitching together all of the sub-FOVs on a hexagonal lattice using nonlinear blending to minimize seams.
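The reconstruction step can be sketched as follows. This is a simplified illustration rather than the authors' processing code: it assumes the raw video is available as a NumPy array, that the pixel coordinates of each microlens image are known, and it omits the laser-intensity normalization and hexagonal stitching described above.

```python
import numpy as np

def assemble_sub_fovs(video, centers, frames_per_line, n=3):
    """Rebuild one sub-FOV image per microlens from the raw focal-spot video.

    video           : (T, H, W) array of camera frames
    centers         : (row, col) pixel coordinates of each microlens image
    frames_per_line : camera frames acquired per fast-axis (x) scan line
    n               : each microlens image spans an n x n pixel block

    Each camera frame contributes one image pixel per microlens: the sum of
    the n x n block, i.e. the fluorescence relayed by that microlens.
    Consecutive frames map to x; successive scan lines map to y.
    """
    n_lines = video.shape[0] // frames_per_line
    h = n // 2
    sub_fovs = []
    for r, c in centers:
        trace = video[:, r - h:r + h + 1, c - h:c + h + 1].sum(axis=(1, 2))
        sub_fovs.append(trace[:n_lines * frames_per_line]
                        .reshape(n_lines, frames_per_line))
    return sub_fovs
```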
We use standard optical lithography followed by reflow to fabricate hexagonally packed MLA masters that are subsequently replicated into an optical adhesive (NOA 61) on a glass slide [11,12]. This replicated MLA is used for imaging experiments. The microlenses in this work have diameters of 122μm and sags of 11.7μm. The entire array measures 4.5cm x 4.5cm and contains more than 140,000 microlenses. We observe no measurable axial chromatic aberration: the MLA has a focal length of 248μm ± 2μm (NA = 0.24) for all three laser wavelengths.
The resolution of our system is determined by the focal spot size. The focal spots created by the MLA do not reach the diffraction limit owing to spherical aberration inherent in the reflow molding fabrication technique. We measure the full width at half maximum (FWHM) values of the microlens focal spots by imaging the focal plane with a microscope objective with a numerical aperture (NA) of 0.80 and a magnification of 100×. Gaussian fits are applied to the images of the focal spots (Fig. 1, (iii)), for field angles of 0° and 2.5° (the maximum field angle used in this work). A focal stack is acquired by moving the microlens array slowly through the focus of the microscope objective. We then extract the frame with the highest peak pixel value and use this image for focal spot characterization. The focal length was determined experimentally by recording the distance between this focal spot plane and the base of the microlens. Thus, the working distance Wd of the microlens array is equal to the focal length minus the sag: Wd = 248μm - 11.7μm = 236.3μm.
The small field angle has almost no discernable effect on the focal spot size and shape. The focal spot FWHMs at 0° field angle are 1.44μm ± 0.03μm, 1.30μm ± 0.03μm and 1.20μm ± 0.03μm for red, green and blue excitation lasers, respectively. At a field angle of 2.5°, the FWHMs are 1.47μm ± 0.03μm (red), 1.30μm ± 0.03μm (green) and 1.22μm ± 0.03μm (blue). The uncertainty is a 95% confidence bound for the Gaussian FWHM parameter when fit to an experimentally measured focal spot. The FWHM values are slightly larger than the FWHMs for a diffraction-limited 0.24 NA lens approximated by a Gaussian focal spot: 1.37μm, 1.10μm and 0.98μm, respectively.
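The FWHM fitting can be illustrated with a short sketch. The helper names are hypothetical, SciPy's `curve_fit` stands in for whatever fitting routine was actually used, and the 95% bound is taken from the fit covariance:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, fwhm):
    # Gaussian parameterized directly by its FWHM: fwhm = 2*sqrt(2*ln 2)*sigma
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    return a * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))

def fit_fwhm(x_um, profile):
    """Fit a line profile through a focal spot image; return the FWHM (um)
    and a 95% confidence half-width derived from the fit covariance."""
    p0 = [profile.max(), x_um[profile.argmax()], 1.0]
    popt, pcov = curve_fit(gaussian, x_um, profile, p0=p0)
    return popt[2], 1.96 * np.sqrt(pcov[2, 2])
```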
Fluorescent samples are not completely flat over large areas. In HCS applications, fluorescent samples typically reside in microwell plates. For our system to have applicability in drug discovery labs that employ HCS, it is necessary that the microwell plate flatness be sufficient to ensure that the height variation within the FOV is less than the depth of field of the system. Figure 2(a) shows a contact profilometry trace of the bottom of an Ibidi 96-well plate. Over a 3cm line trace, the variation in well height is ~15μm. The Fig. 2(a) insets show a focus stack of a FOV containing wrinkled 5.3μm diameter Nile Red beads (535/575nm excitation/emission peaks), imaged with the green excitation channel (532nm) of the microlens microscope. Subjectively, the image quality variation over the 15μm range is nearly imperceptible. We quantify the imaging quality by imaging isolated sub-resolution 500nm Nile Red fluorescent beads (Invitrogen), yielding the point spread function (PSF) of the green channel of the system. The modulation transfer function (MTF) for each imaging depth is calculated by taking the magnitude of the Fourier transform of the measured PSF. In Fig. 2(b) we plot the MTF and PSF for imaging depths 6μm above, 9μm below and at the plane of best focus. As per convention, the MTFs are normalized to unity. Corroborating the observation that imaging quality varies minimally over a 15μm depth of focus, the MTF shows almost no change over this range. Interestingly, at low spatial frequencies, the MTF of our system lies above that of a diffraction-limited 0.24 NA circular aperture wide field system. This is likely due to apodization of the microlens apertures by the steep angle of the lens at the periphery. The result is increasing reflection losses towards the edge of the microlenses. This effect manifests itself as low-pass filtering.
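Computing the MTF as the normalized magnitude of the Fourier transform of the PSF can be sketched as follows (`mtf_from_psf` is a hypothetical helper; a delta-function PSF gives a flat MTF, which serves as a sanity check):

```python
import numpy as np

def mtf_from_psf(psf, pixel_um):
    """MTF = |FFT(PSF)|, normalized to unity at zero spatial frequency.
    Returns the spatial frequency axis (cycles/um) and the 2D MTF."""
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    mtf = np.abs(otf)
    mtf /= mtf.max()  # for a non-negative PSF, the maximum is the DC term
    freqs = np.fft.fftshift(np.fft.fftfreq(psf.shape[0], d=pixel_um))
    return freqs, mtf
```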
Note that this does not mean that our microlenses are more efficient at imaging low spatial frequencies than a diffraction-limited system. Because the MTF plot is normalized, the correct interpretation is that the low spatial frequency signal is boosted relative to the high spatial frequency signal when compared to the diffraction-limited case.
3. Sequential multichannel imaging
To demonstrate the high throughput capabilities of our system, we fill 16 (4 x 4) wells of a 96-well plate (Ibidi) with a mixture of Dragon Green (Bangs Labs), Nile Red (Invitrogen) and Flash Red (Bangs Labs) fluorescent beads. The Dragon Green and Flash Red beads are nominally 7.3μm in diameter and have excitation/emission peaks at 480/520nm, and 660/690nm, respectively. The Nile Red beads have excitation/emission peaks at 535/575nm, are nominally 5.3μm in diameter and have been aged so that their surface is wrinkled, yielding high spatial frequency features. Three separate images of the 4 x 4 well area are acquired with laser wavelengths of 473nm (blue), 532nm (green) and 658nm (red). The camera exposure time is 3.5ms/frame and the approximate optical power in each focal spot is 0.13μW, 0.031μW and 0.037μW for blue, green and red lasers, respectively. For 473nm excitation, a 10nm FWHM bandpass fluorescence emission filter centered at 500nm is used. Long pass filters with cut on wavelengths at 575nm and 664nm are used for excitation wavelengths of 532nm and 658nm, respectively. These excitation wavelength and emission filter combinations separate the beads into their constituent populations without the need for spectral unmixing.
Figure 3 shows a 3-channel image of all 16 wells, along with magnified regions of four wells (Figs. 3(b)–3(e)). The red/green/blue color channels of the presented images correspond to the images obtained with red/green/blue lasers, respectively. Color channels are aligned by finding the position of their cross-correlation minimum and then correcting for this offset. Wrinkling of the green-colored beads is evident, while the blue- and red-colored beads have smooth surfaces. All areas of the 4 x 4 well region are in focus owing to the large depth of field of the microlenses, which accommodates both intra- and interwell height variations. The 4 x 4 well region has an area of 12.25cm2, corresponding to 90,550 microlenses. At a frame rate of 200 fps this is a total pixel throughput of 18.1 Mpx/s. A portion of this data corresponds to the plastic support regions between wells (Fig. 2(a)), which contain no fluorescent sample. Also, a small amount of overlap (10μm) between neighboring sub-FOVs is necessary for the stitching process. After accounting for these two factors, the final data set for each color consists of 16 images each 16,000 x 16,000 pixels in size. The acquisition time is 320 s, corresponding to a pixel throughput of 12.8 Mpx/s.
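The channel-alignment step can be sketched with an FFT-based circular cross-correlation. This is an illustrative helper, not the authors' code; the `mode` names are hypothetical, with "min" corresponding to the anti-correlated bead populations used here and "max" suiting co-localized stains such as the tissue sample below.

```python
import numpy as np

def channel_offset(a, b, mode="max"):
    """Estimate the (dy, dx) shift between two channel images from their
    circular cross-correlation, computed with FFTs.

    mode="min" finds the anti-correlation offset (distinct color populations
    that never co-localize); mode="max" suits co-localized stains.
    """
    a = a - a.mean()
    b = b - b.mean()
    xcorr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    idx = xcorr.argmax() if mode == "max" else xcorr.argmin()
    dy, dx = np.unravel_index(idx, xcorr.shape)
    H, W = xcorr.shape
    # wrap shifts into the range [-H/2, H/2)
    return (int((dy + H // 2) % H - H // 2), int((dx + W // 2) % W - W // 2))
```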
We demonstrate the tissue imaging capability of our system by imaging a 16μm thick slice of mouse kidney (Life Technologies FluoCells Prepared Slide #3), sealed between a microscope slide and a #1.5 coverslip. The tissue section is stained with Alexa Fluor 488 and Alexa Fluor 568 Phalloidin, while the fluorescent stain 4',6-diamidino-2-phenylindole (DAPI) is also present but not imaged. Laser excitation wavelengths of 473nm (focal spot power = 0.20μW) and 532nm (focal spot power = 0.55μW) are used to excite the Alexa Fluor 488 and 568 fluorophores, respectively, whose emission is filtered with long pass filters with cut-on wavelengths of 500nm and 575nm. The camera integration time is set to 4.9ms/frame. As some of the tissue features stained with different fluorophores tend to be co-localized, we align the channels by finding their cross correlation maximum and correcting for that shift. The resulting aligned image is shown in Fig. 4(a), with the Alexa Fluor 488 (568) channel in green (red). The sample has an extent of roughly 0.5cm x 1.0cm. At a moderate zoom level, the brush border and glomeruli can be identified (Fig. 4(b)). At full image size (Fig. 4(c)), fine features such as filamentous actin and microtubules are visible. Figure 4 (Media 1) shows the sample at various magnifications, zooming in from the full field view in Fig. 4(a) and finishing with an image of the region in Fig. 4(c).
4. Parallel dual-channel imaging
Not all fluorescent samples are large enough to take full advantage of the large FOV imaged by our microlens microscope. For example, the throughput of the sequential multichannel system in the previous section is only roughly 0.75 Mpx/s when imaging the small mouse kidney sample. This throughput is modest because the sample is small, meaning that not all of the microlenses within the array are used. To increase throughput for small samples, we employ a parallelized system. The fluorescent signal from the sample is split into low-pass (500nm < λ < 532nm) and high-pass (λ > 575nm) spectral components, and both are imaged simultaneously onto the camera. Spectral decomposition is achieved by inserting a long-pass dichroic mirror (edge λ = 532nm) between the quad-band dichroic mirror and SLR lens in Fig. 1. A pair of broadband mirrors is positioned on either side of the long pass dichroic mirror. The broadband mirrors are tilted at small (and opposite) angles with respect to the long-pass dichroic, sending the fluorescence signals from the short- and long-pass components into the SLR lens at opposing angles. Emission filters are inserted at 45° with respect to each of the broadband mirrors to cut out any residual laser illumination. This arrangement of a long-pass dichroic filter, pair of broadband mirrors and emission filters will be termed a spectral splitting module (SSM).
The spectral components of the fluorescent signal arrive at the SLR lens at opposing angles. As a result, the SLR lens images two copies of the microlens apertures onto the camera sensor – one for each spectral component. The setup is shown schematically in Fig. 5. We use long-pass filters (λ > 500nm and λ > 575nm) as emission filters in the SSM, but in general any pair of filters such as bandpass filters may be used provided that the signal level of each spectral copy is comparable.
The kidney section shown in Fig. 4 is imaged again, this time using the parallel dual-channel setup described above, using 473nm laser excitation. The resulting image is shown in Fig. 6. This parallel geometry image matches closely with the sequentially acquired image; the two are compared in Fig. 6(b). There is some mixing of the two spectral channels because of the emission filters used. Nevertheless, the localization of the fluorescent stains to their appropriate cellular structures is evident. Spectral bleeding can be mitigated or avoided altogether by using bandpass filters within the SSM that are appropriate for the given fluorophores.
Parallel acquisition of two spectral channels results in a direct doubling of effective pixel throughput to 1.5 Mpx/s. This parallel configuration is not limited to small samples, however. Most microwell plates consist of loosely packed rectangular arrays of circular wells. One could duplex two channels on the sensor by interleaving the spectral copies of the rectangular array of wells to form a quasi hexagonally packed array of wells, thereby doubling the throughput. This interleaving approach is possible with any regularly arrayed substrate with less than a 50% packing fraction.
5. Dynamic range
As discussed, our method records the fluorescence collected by each microlens by integrating the pixels associated with its image on the camera sensor. Interestingly, this extends the dynamic range of our imaging technique to exceed that of each pixel of the camera. Recall from Fig. 1, (ii) that each microlens aperture is imaged to an n x n pixel region on the camera sensor. Therefore, even though the camera records an 8-bit video, the conglomeration of the N = n2 pixels assigned to each microlens can take on values from 0 to 255 x N. We will call this combination of N pixels a “superpixel”. The extent of each microlens (122μm), the extent of the camera sensor pixels (5.5μm) and the ~7x demagnification provided by the SLR lens result in the superpixel comprising N = 9 pixels.
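The superpixel size follows directly from the system geometry; a small sketch using the values above (the ~7x demagnification is approximate):

```python
# Number of camera pixels spanned by the image of one microlens.
microlens_um = 122.0   # microlens diameter
demag = 7.0            # approximate demagnification of the SLR relay
cam_pixel_um = 5.5     # camera pixel pitch

n = round(microlens_um / demag / cam_pixel_um)  # pixels across one image
N = n ** 2                                      # pixels per superpixel
print(n, N)  # 3 9
```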
The point spread function (PSF) created by the SLR lens on the camera sensor is comparable in size to the image of a microlens, leading to an image of a microlens that resembles a Gaussian spot. As a result, the center of the image of a microlens will saturate its camera pixel before a pixel on the periphery of the microlens image saturates. This restricts off center microlens pixels to smaller dynamic ranges than the center. Thus, unsaturated superpixel values are restricted to values smaller than 255 x 9 = 2295.
To quantify the dynamic range of our system, we record a movie of microlenses relaying fluorescence in the following configuration. The piezo stage is not scanned and the SLR lens aperture is set to f/8. The microlens focal spots are brought into focus so that they excite fluorescence at the sample. We record 300 frames at 200 fps. Laser fluctuations are mitigated by dividing each frame by the average intensity of all microlenses in the frame and then multiplying by the highest average microlens intensity of all of the frames. The signal-to-noise ratio (SNR) of a superpixel i is calculated by dividing the mean superpixel value by its standard deviation σi. The SNR curve for an N = 1 superpixel is calculated using only the central pixel in each superpixel (blue boxes, Fig. 1, (ii)). For the N = 9 case we sum all 9 pixels that make up the image of each microlens and then calculate the SNR using this summed value (red boxes, Fig. 1, (ii)). In order to avoid pixel value saturation, we do not include any superpixels for which any pixel reaches a value of 255 during the movie.
Figure 7 shows a comparison between the SNR curves for superpixels of size N = 1 and N = 9 pixels. The N = 9 curve shows lower SNR than the N = 1 curve due to increased read noise arising from the use of 9 pixels. This property is captured by a model that accounts for shot noise and camera pixel read noise: SNR = gS/√(gS + σr²), where S is the signal intensity in grey level units (e.g. 0 to 255 for N = 1), g is the camera gain in photoelectrons per grey level and σr is the read noise in photoelectrons. We fit this model to the data in Fig. 7, resulting in best-fit parameters (g, σr) of (34.21, 13.64) for N = 1 and (31.00, 41.97) for N = 9 superpixels. For the N = 9 fit we only use data points where every pixel in the superpixel has a grey level above 0, as pixels within a superpixel that contain no signal do not contribute to the overall noise because the read noise is below one grey level. As expected, we find that the read noise for an N = 9 superpixel is 3.1-fold higher than for the N = 1 case, because noise sources within a superpixel add in quadrature.
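The model and fit can be sketched numerically. This is a sketch under stated assumptions: a shot-noise-plus-read-noise form SNR = gS/√(gS + σr²), synthetic data standing in for the measured curves, and SciPy's `curve_fit` as a stand-in fitting routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def snr_model(S, g, sigma_r):
    """Shot noise on g*S photoelectrons plus read noise sigma_r (electrons),
    added in quadrature; S is the signal in camera grey levels."""
    return g * S / np.sqrt(g * S + sigma_r ** 2)

# Fit the model to noiseless synthetic data generated from the quoted
# N = 1 best-fit values; the measured curves are fit the same way.
S = np.linspace(5, 255, 50)
snr = snr_model(S, 34.21, 13.64)
popt, _ = curve_fit(snr_model, S, snr, p0=[30.0, 10.0])

# With the quoted N = 9 parameters, the signal level where SNR = 1
# (the noise floor) follows from g*S = sqrt(g*S + sigma_r**2):
g, sr = 31.00, 41.97
floor = (1 + np.sqrt(1 + 4 * sr ** 2)) / (2 * g)
print(round(float(floor), 2))  # 1.37 grey levels
```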
At the top end of the dynamic range, the SNR is dictated by shot noise. Because an N = 9 configuration can collect more photons before saturation, the maximum SNR is increased. We measure an 87% larger SNR for N = 9 (SNR = 174) than for N = 1 (SNR = 93) at the top of the dynamic range.
The dynamic range is also improved with the N = 9 superpixel configuration. The largest unsaturated signal for N = 9 takes a grey value of 1190 while the fitting results in Fig. 7 imply that the noise floor (SNR = 1) is at a grey level of 1.37. The dynamic range of an N = 9 superpixel is therefore 1190/1.37 = 868 (58.77 dB). For an N = 1 superpixel, the dynamic range is limited to 255 (48.13 dB) by 8-bit digitization. The dynamic range for an N = 9 superpixel is increased 3.4-fold (868/255) over the intrinsic dynamic range of the camera, from 8 bits to 9.76 bits. The increase in dynamic range comes with the caveat that it inherently occurs at higher signal levels – the extra bits come at the top end of dynamic range, as indicated by the dynamic range extension region in Fig. 7.
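The dynamic range figures above follow from simple arithmetic on the quoted values:

```python
import numpy as np

# Values from the text: largest unsaturated N = 9 signal and SNR = 1 level.
s_max, s_floor = 1190, 1.37
dr = s_max / s_floor                 # dynamic range of an N = 9 superpixel
print(int(dr))                       # 868
print(round(20 * np.log10(dr), 1))   # 58.8 dB
print(round(np.log2(dr), 2))         # 9.76 bits
print(round(dr / 255, 1))            # 3.4 (vs. the 8-bit camera)
```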
For weakly fluorescent samples, imaging can become read noise dominated. A superpixel of size N contributes √N times more read noise than a single pixel, reducing the SNR in the read-noise-limited regime by a factor of √N. Therefore, when sensitivity is paramount, it is optimal to reduce the magnification in order to direct all the photons from a single microlens to as few pixels as possible (small N). However, the magnification must be kept large enough to ensure that the images of the microlenses are still resolvable on the camera. The magnification can thus be tailored according to the nature of the sample in order to maximize either sensitivity or dynamic range.
We have demonstrated an extended dynamic range MLA-based high throughput multichannel fluorescence imaging system. Two geometries are presented: sequential and parallel multichannel acquisition. The microscope uses a MLA to image over a large FOV, thereby reducing the mechanical dead time inherent in commercial high throughput imagers. We sequentially image three channels of fluorescent beads in a microwell plate at a rate of 3 wells per minute per channel at resolutions down to 1.20μm. The total pixel throughput of our system is 18.1 Mpx/s. When the fact that much of the microwell plate consists of plastic support regions (that do not contain beads) and the stitching overhead are taken into account, the pixel throughput is 12.8 Mpx/s. This is roughly 3x faster than commercial microscopes. Even higher speeds are possible with continuous samples with no dead space. We also demonstrate sequential and parallel multichannel tissue imaging, indicating our system's applicability to arrayed tissue imaging. We anticipate that this could be used for a variety of applications in biological research requiring high throughput imaging, for example labeled brain tissue sections (Brainbow [17]) in neuroscience. The parallel dual-channel geometry could potentially be used to make up for blank space in the well plate by interleaving two different spectral copies of the well plate onto a single image sensor, thereby doubling the pixel throughput.
This work was supported partly by the National Science Foundation (grant number ECCS-1201687), and partly by Thermo Fisher Scientific. Fabrication was carried out in the Harvard Center for Nanoscale Systems (CNS), which is supported by the NSF.
References and links
4. Olympus ScanR specifications website, http://www.olympus-europa.com/microscopy/en/microscopy/components/component_details/component_detail_21320.jsp. Accessed 22 January 2014.
5. Molecular Devices ImageXpress Micro XLS specifications website, http://www.moleculardevices.com/Products/Instruments/High-Content-Screening/ImageXpress-Micro.html. Accessed 28 January 2014.
8. S. Pang, C. Han, J. Erath, A. Rodriguez, and C. Yang, “Wide field-of-view Talbot grid-based microscopy for multicolor fluorescence imaging,” Opt. Express 21(12), 14555–14565 (2013). [CrossRef] [PubMed]
9. S. A. Arpali, C. Arpali, A. F. Coskun, H. H. Chiang, and A. Ozcan, “High-throughput screening of large volumes of whole blood using structured illumination and fluorescent on-chip imaging,” Lab Chip 12(23), 4968–4971 (2012). [CrossRef] [PubMed]
10. B. Hulsken, D. Vossen, and S. Stallinga, “High NA diffractive array illuminators and application in a multi-spot scanning microscope,” J. Eur. Opt. Soc. Rapid Publ. 7, 12026 (2012). [CrossRef]
13. J. B. Pawley, Handbook of Biological Confocal Microscopy, 3rd ed. (Springer, 2006).
15. F. T. O’Neill and J. T. Sheridan, “Photoresist reflow method of microlens production Part II: Analytic models,” Optik 113(9), 405–420 (2002). [CrossRef]
16. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill International Editions, 1996), Chap. 2.
17. D. Cai, K. B. Cohen, T. Luo, J. W. Lichtman, and J. R. Sanes, “Improved tools for the Brainbow toolbox,” Nat. Methods 10(6), 540–547 (2013). [CrossRef]