In this paper, we present a method of using a digital micro-mirror device (DMD) for dynamic range enhancement of digital optical microscope images. Our adaptive feedback illumination control generates a high dynamic range image through an algorithm that combines a DMD-to-camera pixel geometric mapping with a feedback operation. The feedback process iteratively generates an illumination pattern that spatially modulates the DMD array elements at the pixel level. We experimentally demonstrate a system that uses precise DMD control of the projector to enhance the dynamic range, ideally by a factor of 573. Results show a captured dynamic range approximately 5 times that of the camera alone, enabling visualization over a wide range of specimen characteristics.
© 2009 Optical Society of America
Digital optical microscope techniques enable a wide variety of image acquisition and enhancement capabilities that collectively represent a major trend in microscope evolution. Applications of digital photography and digital processing techniques have resulted in microscope images with better resolution, enhanced contrast, and reduction of image impairments. Factors that often impair the quality of optical images are optical lens aberrations, imaging device pixel resolution, and dynamic range of both the microscope system and specimen under observation.
The dynamic range (DR) of an imaging device can be defined as the ratio between the maximum and minimum light intensities that it can detect. In recent years, interest in high dynamic range imaging has opened a new frontier in research and industry, pointing the way to the next generation of image capture and display devices. In this work, we explore the use of digital techniques to overcome the limited ability of a typical digital camera to capture the wide dynamic range of specimen features under observation.
The type of detector used in imaging depends on the area of application, ranging from photon-counting detectors in low-light imaging to the solid-state detectors commonly used in digital fluorescence microscopy. However, with solid-state detectors and CCD cameras in digital microscopy, it may be difficult to capture subtle variations in the specimen because a CCD camera has a limited number of available brightness values to accommodate the entire range of these variations. This limitation is usually responsible for loss of signal in the dimmest or brightest parts of the specimen. Therefore, enhancement of dynamic range improves not only the qualitative visual observation of a specimen but also the quantitative measurement of its intensity levels.
Recently, digital micro-mirror devices (DMDs) have served as a key enabling technology in digital imaging. Application of DMDs in digital optical microscopy has given rise to different microscope configurations, including digital aperture control and dynamic illumination [1, 2], and spatial multiple-aperture scanning and illumination pattern generation and detection [3–5]. All of these configurations demonstrate greater flexibility and control over the mechanical or geometrical structure of the optical path, in addition to improving image quality. However, none of these structured-illumination approaches has attempted to address the limitation imposed by a digital camera on the dynamic range of an optical microscope.
The advantages of a high dynamic range (HDR) image depend on the characteristics of the specimen and the microscopy technique being used. In industrial applications, many products (microchips, ceramics, polymers, etc.) are characterized by high opacity, and imaging such specimens in brightfield is difficult; inspection images of these products are instead obtained with reflected-light techniques. Objects such as integrated circuits consist of components with a wide range of light reflectance properties that may produce poor low dynamic range (LDR) images, so HDR imaging is required. Most biological specimens (such as tissues and cell cultures) often exhibit poor contrast because they are very transparent to light in a traditional brightfield microscope. These do not require HDR imaging, and special imaging techniques (such as fluorescence, phase contrast, and differential interference contrast (DIC)) have been developed to increase contrast. However, some biological fluorescence specimens, and objects with both highly transparent and opaque regions, often possess a high contrast range that may require dynamic range improvement in brightfield imaging. For example, images of a honeybee leg (claw) are shown at multiple exposures in Fig. 1. In (a) the transparent part of the tarsus can be seen, but the features in the shadow that correspond to the dark region are not visible. Dark features are revealed in (b), but the transparent part is saturated. Therefore an image that combines features in both the transparent and dark regions is necessary.
In computer vision and photography, the conventional approach to enhancing the dynamic range of an imaging system is to acquire several low dynamic range (LDR) images of the scene with varied exposure times. The set of images is used to compute a high dynamic range (HDR) image and to estimate the camera transfer function. Liquid crystal and DMD-based spatial light modulators have been implemented in some configurations to vary the scene radiance received on each camera pixel, in a fashion similar to the varying-exposure method [7, 8]. This has enabled high dynamic range imaging at high speed, without the restriction to static scenes imposed by the conventional method. The use of a DMD mitigates the low light efficiency, diffraction effects, and defocusing of the attenuation function commonly encountered in liquid-crystal-based HDR systems.
The combination of LDR images to produce the HDR radiance map depends on how accurately the camera response function can be recovered. This function relates the actual scene radiance to the camera pixel value in the image. One class of methods minimizes a quadratic objective function relating the response function to the scene irradiances, using multiple exposure images and numerical methods [6, 9–11]. In this work, the application of the DMD to recovery of the camera response function is treated as a component of our dynamic range enhancement process.
In this work, we present a simple experimental setup that uses a DMD to achieve dynamic range enhancement in a transmitted-light microscope configuration. The ability of our system to rapidly modulate the spatial profile of light emitted from the DMD, based on camera output and without necessarily changing the camera exposure, makes our approach faster and more flexible than multiple-exposure capture methods. We begin by describing the commonly used numerical methods and our method of applying the DMD to recover the digital camera response. We then present our adaptive feedback illumination control (AFIC), which uses the recovered response curve to produce an HDR image while the DMD spatially controls the specimen-light interaction. To the best of our knowledge, DMD technology has not previously been used to enhance the dynamic range of optical microscope images. We demonstrate experimentally the dynamic range enhancement of a honeybee leg image, and discuss factors limiting performance. Ideally, AFIC is capable of achieving a dynamic range equal to the product of the dynamic ranges of the DMD and the camera.
2. Spatially controlled illumination microscopy
2.1 Experimental Setup
Figure 2 shows the transmitted-light mode of an optical microscope setup in which white-light illumination is provided by the 130 W tungsten lamp of a DMD-based digital projector (U3-810W). The projector has a contrast ratio of 650:1 and incorporates one of the early versions of the Texas Instruments DMD chip with 800 × 600 pixels; each mirror is approximately 17 × 17 μm in size. Light reflected from the "ON" DMD elements is directed through an optical system towards the projector output lens. Based on the working principle of the DMD, the light intensities reflected from the DMD are produced by pulse-width modulating the mirror elements over the operating refresh time; the perceived intensity gray level is thus proportional to the fraction of the refresh time during which the mirror is switched "ON". For projector applications, the DMD is pulse-width modulated through a procedure that converts the applied video signal into pulse-width modulation format. Hence we achieve modulation of the light intensities from the mirror elements by applying an 8-bit image (800 × 600 resolution) via the VGA input. The image displayed on the DMD is projected to the iris plane by an achromatic lens L. This plane is conjugate to the specimen plane S and the image plane. An infinity-corrected objective lens O1 (Mitutoyo Plan Apo, 0.28, 20×) illuminates the specimen plane with a de-magnified image of the illumination pattern on the DMD plane. The light transmitted through the specimen is focused on the camera plane by an infinity-corrected objective lens O2 (Mitutoyo Plan Apo, 0.28, 10×). Our CCD camera is a QImaging Retiga 2000R (1600 × 1200 resolution, 7.4 μm × 7.4 μm pixel size) interfaced with Matlab, where all images are captured, saved, processed, and displayed on the DMD via the VGA input. A computer with Windows XP, an Intel Pentium D CPU at 3.40 GHz, and 2 GB of RAM is used for processing and control.
2.2 Adaptive feedback illumination control
Control over the illumination source in Fig. 2 is achieved by modulating the DMD bit levels with feedback from the output of an algorithm that operates on the captured images. This configuration allows for structured illumination with a wide range of flexibility in the specimen plane. Our adaptive feedback illumination control (AFIC) technique consists of a geometric mapping and an adaptive feedback operation between the DMD and camera elements through the optical system.
2.2.1 Geometric mapping
Geometric mapping involves determining the correspondence between DMD elements and groups of CCD camera pixels. The number of registered camera pixels per "ON-state" DMD element depends on the camera pixel size Δc and the net magnification M between the illumination and imaging arms of the system. Calibration begins by specifying a camera threshold level, followed by registration of the spatial locations of the group of camera pixels with level values higher than the given threshold. The position of the "ON" DMD element is then shifted to the next location and the calibration is repeated in sequence. From the characterized point-spread function (PSF) of the setup (shown in Fig. 3), we note that the light reflected from one DMD element spreads over a group of approximately 4 × 4 camera elements (at 30% of the maximum). The PSF is obtained by measuring the registered levels on the camera pixels when a single DMD element is turned "ON" while all other elements are switched "OFF". In general, this mapping is determined by diffraction in the optics and the magnification of the system.
To reduce calibration time, we perform the spatial calibration on every tenth DMD element (in both dimensions) and linearly interpolate to obtain estimates of both intensity and spatial coordinates of all DMD elements. The calibration process is completed when the lookup table (LUT) has been populated with both intensity and spatial locations (measured or interpolated) of camera elements corresponding to each DMD element position. This entire operation allows us to exercise pixel-by-pixel level of control on the DMD array and to correct for any geometrical shifts in the imaging process.
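The registration and interpolation steps above can be sketched as follows. This is a minimal illustration with hypothetical helper names; the actual system drives the DMD and camera hardware from Matlab.

```python
# Sketch of the geometric calibration: locate the camera-pixel group lit by
# one "ON" DMD element, then linearly interpolate between sparsely measured
# elements to fill the lookup table. Function names are illustrative.

def centroid_above_threshold(frame, threshold):
    """Centroid of camera pixels brighter than `threshold` (the registered spot)."""
    pts = [(x, y, v) for y, row in enumerate(frame)
           for x, v in enumerate(row) if v > threshold]
    total = sum(v for _, _, v in pts)
    cx = sum(x * v for x, _, v in pts) / total
    cy = sum(y * v for _, y, v in pts) / total
    return cx, cy

def interpolate_lut(sparse):
    """Linearly interpolate camera coordinates for the DMD elements lying
    between the measured ones (every tenth element, as in the text)."""
    lut = {}
    keys = sorted(sparse)
    for k0, k1 in zip(keys, keys[1:]):
        (x0, y0), (x1, y1) = sparse[k0], sparse[k1]
        for k in range(k0, k1 + 1):
            t = (k - k0) / (k1 - k0)
            lut[k] = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    return lut
```

In the real system the same interpolation would run along both DMD axes; the one-dimensional version above shows the idea.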
2.2.2 Adaptive feedback modulation
The adaptive feedback algorithm operates on the captured LDR specimen image to generate an appropriate DMD modulation array as illustrated in Fig. 4. When applied to the DMD, this array spatially modulates the field of illumination to capture specimen features in the saturated region of the image.
The DMD is initialized by setting all elements to illuminate the specimen with maximum light intensity. For an HDR specimen, the image captured under this illumination condition, which we call the initial image, shows saturation in the brightest region and detail in the darkest region of the specimen. To eliminate saturation, the light intensity emitted from the corresponding DMD elements is reduced by a constant factor (e.g., 1/2) by setting their levels appropriately based on the DMD transfer function. The result is an array of new values that modulates the DMD elements such that saturated regions of the image are illuminated with lower light intensity. This process is repeated until none of the pixels in the final compressed image are saturated. The number of iterations required to produce the final image depends on the observed specimen characteristics, the size of the intensity reduction between iterations, and the available imaging system dynamic range. Typically, using an intensity step size of 1/2, only 3–4 iterations are required.
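The iteration just described can be sketched as follows, with a toy camera model standing in for the real hardware; capture(), the gain value, and the specimen transmittance map are all illustrative assumptions, not part of the actual system.

```python
# Minimal sketch of the adaptive feedback loop: start at full illumination,
# then halve the DMD level wherever the camera saturates, until no pixel
# is saturated. A toy camera model replaces the real DMD/camera hardware.

SAT = 255  # camera saturation level (8-bit)

def capture(dmd, transmittance, gain=400):
    """Toy camera model: brightness ~ illumination x specimen transmittance."""
    return [[min(SAT, int(gain * d * t))
             for d, t in zip(drow, trow)]
            for drow, trow in zip(dmd, transmittance)]

def adaptive_feedback(transmittance, step=0.5, max_iter=10):
    h, w = len(transmittance), len(transmittance[0])
    dmd = [[1.0] * w for _ in range(h)]          # start at maximum illumination
    for _ in range(max_iter):
        img = capture(dmd, transmittance)
        saturated = [(y, x) for y in range(h) for x in range(w)
                     if img[y][x] >= SAT]
        if not saturated:
            return img, dmd                      # final compressed image
        for y, x in saturated:                   # attenuate only where saturated
            dmd[y][x] *= step
    return img, dmd
```

With a two-pixel "specimen" containing one bright and one dark region, one attenuation step already removes the saturation.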
To determine the maximum achievable dynamic range of our system, we note that the highest pixel value B_max is registered in locations corresponding to the most transparent region of the specimen when illuminated with the minimum DMD illumination D_min. Similarly, the minimum pixel value B_min is registered in locations corresponding to the darkest region of the specimen when illuminated with the maximum DMD illumination D_max. Hence the DR is given by the ratio of maximum to minimum detectable brightness as

DR = (B_max / B_min) × (D_max / D_min).    (1)
This expression shows that the dynamic range of our system is equivalent to the product of the dynamic ranges of the camera and the DMD, expressed as ratios. In practice, DMD background light and overlap from neighboring elements limit the usable projector dynamic range, while noise limits the usable camera dynamic range. This is discussed in detail in Section 3.
2.3 HDR radiance map construction
Given the final compressed image and final DMD modulation array acquired in the previous section, it is necessary to construct an HDR radiance map that gives the actual radiance represented by the pixels in the final image. This process requires knowledge of the camera and DMD response functions to mathematically expand the compressed data. In this context, we employ an algorithm to characterize the camera response function, which gives the relationship between irradiance and registered pixel value. The irradiance value is obtained by measuring the optical power corresponding to the registered digital pixel value. Since the camera pixel measures irradiance and converts it to output digital levels, it is also necessary to characterize the relationship between the digital values applied to the DMD and the corresponding irradiance on the image plane.
2.3.1 DMD characterization
The characterization of the DMD is obtained by using an optical power meter (UTD Instruments) to measure the output power at the image plane as we sweep the applied DMD level sequentially from D_min to its maximum value D_max. The relationship can be expressed as

P_D = T f(D),    (2)

where P_D is the measured optical power, D ∈ [0, D_max] is the applied DMD level, f is the DMD response function, and T is a transfer coefficient that accounts for losses in the optical setup. The ratio of the maximum power P_max to the minimum power P_min, corresponding to levels D_max and D_min respectively, gives the dynamic range (DR) of the projector.
The projector used in our experiment (see Section 3) is controlled with an 8-bit image, so D_min = 1 and D_max = 255. We measured the optical power at the image plane starting from D_min and incrementing by 1 level until D_max was reached. Based on the definition given above, the dynamic range was measured to be 572.8 with the projector in its standard gamma setting. In some projectors it is possible to adjust the gamma setting to a more linear response function; in doing so, however, the dynamic range of the projector, and therefore the maximum achievable dynamic-range enhancement, would be reduced to 255. The corresponding response function, shown in Fig. 5, reveals the non-linear gamma response of our projector. This result enables us to estimate the irradiance on the camera plane for use in the camera response characterization process.
It should be noted that using this method to obtain irradiance values assumes uniform illumination across the field of view (FOV) on the image plane. A measurement of the FOV showed only a small, slowly varying non-uniformity, which we have neglected in this demonstration. The gamma curve of the projector can be changed to different settings (normal, natural, real, and custom) to obtain different curves. In our demonstration the standard setting (normal) is used, and the device dynamic range calculations are based on measurements in this mode; a different setting would change the gamma curve in Fig. 5.
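As a small numerical illustration of this characterization, a pure gamma-law model P ∝ D^γ gives a projector dynamic range of (D_max/D_min)^γ; back-solving with the measured value of 572.8 over levels 1–255 yields an effective exponent near 1.15. This back-calculation is our own illustration of the sweep, not a figure taken from the characterization.

```python
import math

def sweep_dr(powers):
    """Projector dynamic range from a sweep of measured powers (P_max / P_min)."""
    return max(powers) / min(powers)

# Back-solve the effective gamma exponent implied by the measured DR of
# 572.8 over DMD levels 1..255 (illustrative, assuming a pure gamma law).
gamma = math.log(572.8) / math.log(255 / 1)

# A toy sweep with that gamma-law model reproduces the measured DR.
powers = [(d / 255) ** gamma for d in range(1, 256)]
```

A perfectly linear response (γ = 1) would give exactly 255, matching the text's remark about linearized gamma settings.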
2.3.2 Camera response characterization
The camera response function gives the relationship between the recorded digital values and the irradiance on the camera pixels. Generally, the existing methods of deriving the camera response function from multiple exposure images follow three steps: (1) capture multiple exposure images of an HDR scene, which provides an effective sampling of the camera response function; (2) derive response curve fragments from different parts of the aligned exposure images; (3) fit the derived fragments into a single smooth curve [9–11]. A method that retrieves the response function from a single image has also been proposed. This method uses a spatially varying exposure mask to simultaneously sample the spatial and exposure dimensions of the scene; one major advantage of this technique is the reduction in the number of captured images. Our approach combines multiple exposure capture with spatially varying illumination to characterize the response function: we exploit the spatial control capability of the DMD to generate a set of images that sample the camera response function at different exposures. This approach is fast and simple, and eliminates the complex algorithms used in the existing methods.
Given our experimental setup in Fig. 2, we characterize the camera response function by utilizing the programmable capability of the DMD to spatially modulate the illumination. This is achieved by applying a spatially varying light pattern with the DMD. The frame consists of patches whose light intensity increases monotonically from a set minimum to a maximum value. Figure 6 shows a typical frame used in our characterization process. The frame is generated in Matlab (not gamma corrected) with DMD pixel values ranging from 22 to 200 in 8-bit levels. This translates to a total of 90 patches with a difference of 2 levels between patches. It should be noted that the endpoints 22 and 200 are somewhat arbitrary, as any sufficiently large range of values could be used.
Starting with an initial exposure value e1, the corresponding image on the camera is captured and the intensity registered in each patch is obtained by averaging over 100 pixels, as marked in Fig. 7. If the brightest patch in the captured frame is unsaturated, the exposure e1 is increased by a known constant factor to e2 and the spatial modulation pattern is reapplied to produce a new set of registered camera intensities. This process continues until saturation is detected in some of the patches at some maximum exposure value.
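The patch-based sampling can be sketched as follows. Here a toy camera model replaces the real captures, and sample_response and its arguments are illustrative names rather than the actual implementation.

```python
# Sketch of the patch-based response sampling: each patch contributes one
# (irradiance, brightness) sample, and raising the exposure sweeps the
# sample points up the response curve until patches begin to saturate.

SAT = 255  # camera saturation level (8-bit)

def sample_response(patch_powers, exposures, camera):
    """Collect unsaturated (P_D * E, B) samples across all exposures;
    `camera` maps an irradiance-exposure product to a digital level."""
    samples = []
    for e in exposures:
        for p in patch_powers:
            b = camera(p * e)
            if b < SAT:                  # keep only unsaturated patches
                samples.append((p * e, b))
    return sorted(samples)               # fragments merge into one curve
```

Sorting by the irradiance-exposure product is what merges the fragments recovered at different exposures into a single monotonic curve, as in Fig. 8.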
In this context, we define the relationship between the registered camera intensities in each exposure set and the corresponding irradiance on the camera pixels as

B = g(P_D E),    (3)
where B denotes the registered camera level in each intensity patch, E is the camera exposure value for a given set, and P_D is the incident optical power corresponding to the applied spatially varying DMD intensity pattern. Hence, the camera response function g relates the irradiance, expressed as the product of DMD element output power and camera exposure, to the image brightness in each intensity patch (Fig. 7).
Figure 8 shows the results of using our algorithm and the experimental setup to determine the response function of our camera. In this demonstration, two camera frames, corresponding to exposures e1 and e2, were required to complete the characterization. Regions of the graph that represent the registered intensities from the two exposures are differentiated by markers: the lower part of the response (asterisks) is recovered with exposure e1, while the upper part of the curve (circles) corresponds to recovery with exposure e2. The curve interpolated between the registered data (black line) gives the retrieved camera response. Based on the data in Fig. 8, the dynamic range of the camera was measured to be 250.
2.3.3 HDR calculations
Given the final compressed image, the camera response function g, and the initial and final DMD intensity modulation arrays, the HDR radiance map of the acquired final image can be calculated in the following steps. For each camera pixel at position (x, y) in the final image:
- Obtain the relative output power from each element in the DMD array via the DMD response function shown in Fig. 5:

P_D,ξη = f(D_ξη),    (4)

where f denotes the DMD response function and D_ξη is the bit setting of the DMD element at position (ξ, η).
- Use the geometric mapping between the DMD and the camera detector to obtain the relative illumination power P_D,xy on each camera pixel (x, y).
- From Fig. 8 and inverting (3), convert the pixel value B_xy recorded by the camera to the irradiance-exposure product R_xy:

R_xy = g^(-1)(B_xy).    (5)
- Thus, the recovered HDR radiance map is computed as the detected quantity R_xy normalized by the exposure and the illumination power delivered by the DMD:

H_xy = R_xy / (E P_D,xy).    (6)
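Assuming the map is formed by normalizing the detected signal by the exposure and the delivered DMD power, the steps above can be combined into one routine. The inverse response g_inv, the DMD response f, and the reduction of the geometric mapping to same-size arrays are all illustrative simplifications.

```python
# Sketch of the HDR radiance-map calculation: invert the camera response,
# then normalize by exposure and by the per-pixel DMD illumination power.

def hdr_radiance(final_img, dmd, f, g_inv, exposure):
    """H_xy = g_inv(B_xy) / (E * P_D,xy), with P_D,xy = f(D_xy); the
    geometric mapping is assumed already applied (same-size arrays)."""
    return [[g_inv(b) / (exposure * f(d))
             for b, d in zip(brow, drow)]
            for brow, drow in zip(final_img, dmd)]
```

With this normalization, a region imaged under attenuated illumination is restored to the radiance it would have shown under full illumination, which is what allows the compressed image to be expanded beyond the camera's range.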
3. Experimental results and discussion
Operation of our AFIC system is demonstrated in transmission mode by imaging a prepared microscope slide of a honeybee leg. The sample has a dynamic range considerably greater than our camera detector range. While this specimen may not fully exercise the theoretical dynamic range enhancement (572.8), it demonstrates the application of our proposed method. As explained in Section 2.2, the process in summary is to modulate the DMD, capture the corresponding image, and, if there is saturation in the image, recalculate and apply a new modulation pattern to the DMD until the final image is produced. The DMD light intensity for saturated pixels was reduced by 1/2 between iterations, and four iterations were necessary to produce the final image, with each image taken at a fixed exposure of 150 ms. The signal-to-noise ratio of the final image, and also of the HDR radiance map, is improved by combining the images taken at each iterative step: we obtain the value of a particular pixel in the recalculated final image by averaging it over all the images, ignoring any saturated values.
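The averaging step can be sketched as follows, under the assumption that the per-iteration pixel values have already been normalized so that they are comparable across illumination levels.

```python
# Sketch of the noise-reduction step: average each pixel over the stack of
# iteration images, skipping saturated readings.

SAT = 255  # camera saturation level (8-bit)

def average_unsaturated(stack):
    """Per-pixel mean over the image stack, ignoring saturated pixels."""
    h, w = len(stack[0]), len(stack[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[y][x] for img in stack if img[y][x] < SAT]
            out[y][x] = sum(vals) / len(vals) if vals else float(SAT)
    return out
```

Averaging N unsaturated readings reduces uncorrelated camera noise by roughly a factor of sqrt(N), which is the SNR improvement the text refers to.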
Figure 9(a) shows the initial image under maximum illumination (255), where the limited camera dynamic range resulted in saturation in regions 1, 2, 3, and 4. From this image, the DMD modulation pattern shown in Fig. 9(b) is computed. Applying this pattern to the DMD results in the recovery of some features in regions 1 and 2, as shown in Fig. 9(c). The remaining saturation in this image requires the generation of another DMD modulation pattern, shown in Fig. 9(d), whose application generates the image in Fig. 9(e). This figure shows a complete recovery of features in regions 1, 2, and 3, with some saturation remaining in region 4. Complete elimination of saturation in the final image (Fig. 9(i)) is achieved through application of the DMD modulation pattern shown in Fig. 9(h).
Figure 9(j) shows the calculated HDR data tone-mapped to 8-bit intensity levels for low dynamic range display. The HDR data H_xy are calculated using Eq. (6), with E = 150 ms and with B_xy and D_ξη represented in Fig. 9(g) and Fig. 9(h), respectively.
The histograms of the raw HDR data obtained from our method (AFIC) and from traditional multiple exposure capture (MEC) are shown in Fig. 10. The dynamic range of the specimen is calculated to be approximately 1251 (the ratio between the maximum and minimum values in the HDR data), which is approximately 5 times larger than that of the digital camera alone. It should be noted that the calculations and results in this demonstration are based on measurements taken with the projector gamma setting in standard mode, which is responsible for the theoretical maximum enhancement factor of 573 obtained in the DMD characterization process. In practice, achieving this maximum enhancement factor is difficult: apart from the specimen characteristics, camera noise and scattering from the projector and optics limit the achievable enhancement.
The processing time for acquiring the final image depends on the number of iterations as well as the exposure time. In this case, with four iterations and an exposure time of 150 ms, the final image was captured in a total time of approximately 0.8 seconds, with little additional time required to grab each camera frame and to calculate and apply the DMD modulation array.
Figures 11(a)–(e) show the sequence of multiple exposure captures of the same honeybee leg taken at 9 ms, 19 ms, 38 ms, 75 ms, and 150 ms, respectively. These images are combined using the traditional multiple exposure algorithm to generate the corresponding HDR data. The tone-mapped image (8-bit intensity) of the HDR data is shown in Fig. 12.
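A minimal sketch of such a multiple-exposure merge, using a simple hat-weighted average of brightness divided by exposure in the spirit of [10], and assuming a linear camera for brevity:

```python
# Sketch of the traditional multiple-exposure merge used for comparison:
# each exposure contributes a radiance estimate B/E, and a hat-shaped
# weight downweights dark and near-saturated readings.

def weight(b, sat=255):
    """Hat weight: trust mid-range pixel values, distrust dark or saturated ones."""
    return min(b, sat - b)

def merge_exposures(images, exposures):
    """Weighted per-pixel average of the radiance estimates B/E."""
    h, w = len(images[0]), len(images[0][0])
    hdr = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = sum(weight(img[y][x]) * img[y][x] / e
                      for img, e in zip(images, exposures))
            den = sum(weight(img[y][x]) for img in images)
            hdr[y][x] = num / den if den else 0.0
    return hdr
```

The full algorithm of [10] also recovers a non-linear response curve and works in the log domain; the linear version above conveys only the weighting-and-merging structure.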
The final HDR images and their corresponding histograms are quite similar for the AFIC method and the multiple-exposure method. However, our system provides flexibility in spatially controlling the illumination across the field of view without changing the camera exposure; changing the exposure setting would offer a further degree of freedom when combined with dynamic spatial illumination control.
Future work will involve further improvement of the factors that limit the achievable dynamic range of the system. One possibility is to numerically reduce the overlap between the groups of camera pixels that correspond to neighboring DMD elements. From the geometric mapping operation, we note that a DMD element spreads into roughly a 4 × 4 group of camera pixels, which partially overlaps the group of pixels corresponding to the neighboring DMD element. These overlapping pixels may sometimes be driven into saturation, since their intensity contribution comes from more than one DMD element; under this circumstance it is difficult to achieve the desired intensity for some camera pixels in the adaptive feedback algorithm. We alleviated this problem by lowering the target intensity for these pixels until saturation was eliminated. In principle, more elegant methods for managing this can be devised. This overlap, which results from imperfections in the optical imaging system, can also be reduced by improvements in optical system design. Our use of an off-the-shelf projector in white-light mode creates two problems: we have limited control over the internal optics, and hence suffer significant aberrations that lead to overlap in the image plane; and this is exacerbated by the use of broad-spectrum illumination. A custom-built imaging system operated with a monochromatic source could decrease the overlap significantly.
Another possible improvement is to maximize the signal-to-noise ratio (SNR) of the final image, which requires combined knowledge of the camera and DMD noise. Light scattering from the DMD chip and the optical components of the projector increases the detected background noise, especially at low DMD levels. The effect of camera noise in our measurement was reduced by combining contributions from all images captured in the iteration process. Maximizing the SNR of the final image will depend on how successfully the overlap can be reduced, since this will allow the DMD intensity-modulation range to be maximized without causing saturation in the image.
We have presented an application of a DMD to dynamic range enhancement of a digital optical microscope through an adaptive feedback illumination system. By achieving precise spatial control over DMD pixel intensities at the illumination source, together with thorough system mapping, we capture specimen features that extend beyond the dynamic range of the imaging system. Our demonstration in transmitted-light mode shows that a precisely controlled DMD allows an imaging system with a dynamic range that is, in principle, 573 times larger than that of the digital camera. It is our hope that these techniques will provide a useful path toward improved quantitative measurements in digital optical microscopy.
References and links
1. D. Dudley, W. M. Duncan, and J. Slaughter, “Emerging digital micromirror device (DMD) applications,” Proc. SPIE 4985, 14–25 (2003). [CrossRef]
2. A. L. P. Dlugan and C. E. MacAulay, “Update on the use of digital micromirror devices in quantitative microscopy,” Proc. SPIE 3604, 253–262 (1999). [CrossRef]
3. V. Bansal, S. Patel, and P. Saggau, "A high-speed confocal laser-scanning microscope based on acousto-optic deflectors and a digital micromirror device," in Proceedings of the IEEE Conference on Engineering in Medicine and Biology Society (Institute of Electrical and Electronics Engineers, TX, 2003), 17–21.
4. P. J. Verveer, Q. S. Hanley, P. W. Verbeek, L. J. van Vliet, and T. M. Jovin, "Theory of confocal fluorescence imaging in the Programmable Array Microscope (PAM)," J. Microsc. 189, 192–198 (1998). [CrossRef]
5. M. Liang, R. L. Stehr, and A. W. Krause, "Confocal pattern period in multiple-aperture confocal imaging systems with coherent illumination," Opt. Lett. 22, 751–753 (1997). [CrossRef]
6. E. Reinhard, G. Ward, S. Pattanaik, and P. Debevec, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting (Morgan Kaufmann, 2006), Chap. 4.
7. S. K. Nayar, V. Branzoi, and T. E. Boult, “Programmable Imaging Using a Digital Micromirror Array,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (CVPR, 2004), 436–443.
8. S. K. Nayar and V. Branzoi, “Adaptive Dynamic Range Imaging: Optical Control of Pixel Exposures over Space and Time,” in Proceedings of IEEE International Conference on Computer Vision, (ICCV, 2003), 1168–1175. [CrossRef]
9. M. D. Grossberg and S. K. Nayar, “High dynamic range from multiple images: which exposure to combine?,” presented at ICCV Workshop on Color and Photometric Methods in Computer Vision (CPMCV), Nice, France, 11–17 Oct., 2003.
10. P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques (ACM Press/Addison-Wesley, NY, 1997), 369–378.
11. T. Mitsunaga and S. K. Nayar, “Radiometric self calibration,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (CVPR, 1999), 374–380.
12. S. K. Nayar and T. Mitsunaga, "High dynamic range imaging: spatially varying pixel exposures," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (CVPR, 2000), 472–479. [CrossRef]