Color enhancement of multispectral images is useful for visualizing their spectral features. A color enhancement method that enhances the feature of a specified spectral band without changing the average color distribution was previously proposed. However, the enhanced features are sometimes indiscernible or invisible, especially when the enhanced spectrum lies outside the visible range. In this paper, we extend the conventional method for more effective visualization of spectral features in both the visible and non-visible ranges. In the proposed method, the user specifies the spectral band for extracting the spectral feature and the color for visualization independently, so that the spectral feature is enhanced with an arbitrary color. The proposed color enhancement method was applied to different types of multispectral images, and its effectiveness in visualizing spectral features was verified.
© 2011 OSA
Multispectral imaging uses more than three spectral filters to capture images that include spectral information, which is useful for remote sensing [1–3], color reproduction [4, 5], image analysis [6–9], and so on. High-fidelity color reproduction [4, 5], which is difficult to accomplish with conventional RGB systems due to the limited information contained in RGB images, is made possible by using multispectral images of the visible spectral range. Moreover, spectral features that are invisible to the human eye can also be captured and employed for object detection, recognition, or quantification. Color enhancement is an effective tool for exploring the spectral features contained in multispectral images. For example, Gillespie et al. [6], Ward et al. [7], and others proposed color enhancement methods for multispectral images. In most cases, the enhancement results are pseudo-color images in which the natural colors of the objects are not preserved. However, the natural color of the objects is also important for interpreting the spectral features when the multispectral image includes the visible spectral range.
Mitsui et al. [8, 9] proposed a multispectral color enhancement method in which the enhanced results are overlaid on the original natural-colored images. In this method, the differences between the original multispectral image and its approximation by a few principal components are amplified at specified spectral bands. The indiscernible spectral feature in the multispectral image is thus visualized without changing the average color distribution. However, the enhanced feature sometimes cannot be observed, especially when the specified spectral band is not visually significant, for example, in the near ultraviolet or infrared. Also, when an image has a large number of spectral bands, the enhanced results are not clear.
In this paper, we extend the conventional method [8] by modifying the visualization algorithm to effectively visualize the enhanced spectral features of a multispectral image that could not be visualized well by the conventional method. In the proposed method, the user can specify the spectral band for extracting the spectral feature and the color for visualization independently, so that the desired spectral feature is enhanced with the specified color. This allows the enhanced spectral features to be visualized clearly even if the feature lies in the invisible range or the image has a large number of spectral bands, as in hyperspectral images. For this purpose, we present three methods to determine the color for visualization. In the experiments, we applied the proposed methods to various types of multispectral images, namely a skin image, a microscopic image, and a rice paddy image, and verified that the proposed method can effectively enhance indiscernible spectral features in multispectral images.
2.1. Multispectral color enhancement
The color enhancement presented in this paper is mainly based on the method proposed by Mitsui et al. [8]. This method enhances the color difference from the dominant Karhunen-Loeve (KL) component without changing the color determined by the dominant component. The algorithm of the color enhancement procedure is shown in Fig. 1. First, a set of spectral data is extracted from the image in order to derive the dominant component. The data can be extracted from the entire image or from part of the image (e.g., a region of non-interest), depending on the requirements of the application. Then, a covariance matrix is derived from the extracted spectral samples to calculate the KL basis vectors. The first few KL vectors are used to estimate the dominant component of the image.
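The derivation of the KL basis described above can be sketched in a few lines of numpy; the sample matrix, the number of dominant components, and the function names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def kl_basis(samples, n_components):
    """Derive KL (principal component) basis vectors from spectral samples.
    samples: (num_samples, N) spectral data; returns ((N, n_components), mean)."""
    mean = samples.mean(axis=0)
    cov = np.cov(samples - mean, rowvar=False)   # N x N covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]            # sort descending by variance
    return eigvecs[:, order[:n_components]], mean

def dominant_component(g, basis, mean):
    """Approximate pixel spectra g (num_pixels, N) by the dominant KL subspace."""
    return (g - mean) @ basis @ basis.T + mean
```

With all N basis vectors retained, the projection reproduces the input exactly; truncating to the first few vectors yields the dominant-component approximation whose residual carries the spectral feature to be enhanced.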
In an N-band multispectral image, the enhanced signal value vector for the j-th pixel, g_ej (an N-dimensional vector), is given by Eq. (1). From the relationship of Eqs. (1), (2), and (3) we obtain Eq. (5), in which the second term on the right-hand side is a constant vector. Thus, the spectral enhancement is easily computed by matrix multiplications and additions.
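The equations themselves are not reproduced in this excerpt; from the surrounding description, the enhancement adds an amplified residual back to each pixel, which can be read as g_ej = g_j + k W (g_j − s_j), with s_j the dominant-component approximation, W a weighting matrix, and k a gain. A minimal sketch under that assumed form:

```python
import numpy as np

def enhance(g, s, W, k):
    """Assumed form g_ej = g_j + k * W @ (g_j - s_j).
    g, s: (num_pixels, N) spectra and their KL approximations;
    W: (N, N) weighting matrix; k: scalar gain."""
    residual = g - s                 # component not explained by the KL subspace
    return g + k * residual @ W.T    # amplified residual added back per pixel

# Because W and k are fixed across pixels, the whole image is enhanced
# with one matrix multiplication and one addition, as noted in the text.
```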
The enhanced multispectral image g_ej is transformed into spectral reflectance or transmittance by a spectral estimation technique [10], and the color image is generated by using a color-matching function (CMF) such as the CIE 1931 XYZ CMF, an illumination spectrum, and a matrix for the XYZ-to-RGB transform.
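The rendering step can be sketched as follows: weight the spectrum by the illuminant, integrate against the CMF to obtain XYZ, and apply an XYZ-to-RGB matrix. The CMF, illuminant, and transform matrix are inputs here rather than the paper's specific data.

```python
import numpy as np

def spectrum_to_rgb(reflectance, illuminant, cmf, xyz_to_rgb):
    """reflectance, illuminant: (N,); cmf: (N, 3) sampled x̄, ȳ, z̄;
    xyz_to_rgb: (3, 3) transform matrix. Returns linear RGB."""
    stimulus = reflectance * illuminant      # light reaching the observer
    xyz = cmf.T @ stimulus                   # integrate against the CMF
    k = 1.0 / (cmf[:, 1] @ illuminant)       # normalize so a perfect white has Y = 1
    return xyz_to_rgb @ (k * xyz)
```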
2.2. Modification of weighting factor matrix
In order to overcome the limitation of the conventional method, we extend the definition of the matrix W in Eq. (4) such that the band at which to extract the spectral features and the color for visualization can be specified independently. In this paper, the modified version of the matrix W is called the weighting factor matrix, and its q-th column vector is designed as in Eq. (7). According to Eqs. (1) and (7), the spectrum (g_d − g_a), amplified by the residual component at each pixel, is added to the original signal value g_j. Setting a proper coefficient k allows the color of the enhanced region to change toward the target color determined by g_d [Eq. (1)]. The spectral data of the background color, g_a, can be the average spectral data of the entire image.
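Since Eq. (7) is not reproduced in this excerpt, the following sketch encodes one reading consistent with the text: the column of W for the selected band n carries the spectrum (g_d − g_a) and all other columns are zero, so that the residual at band n alone scales the color shift toward g_d.

```python
import numpy as np

def weighting_matrix(g_d, g_a, n, num_bands):
    """Assumed weighting factor matrix: column n is (g_d - g_a), others zero.
    g_d, g_a: (num_bands,) target and background spectra; n: 0-based band index."""
    W = np.zeros((num_bands, num_bands))
    W[:, n] = g_d - g_a              # only band n's residual contributes
    return W
```

Applying this W to a residual vector r yields r[n] * (g_d − g_a): the larger the spectral feature at band n, the further the pixel is pushed toward the target color.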
There are several approaches to determine g_d, the spectral data of the target color; we present three possible methods in the following.
Method I. In the first method, a relationship between the wavelengths of the multispectral image and the colors for visualization is defined. The spectrum of the color assigned to the n-th band is then derived by a spectral estimation technique and used as g_d when the n-th band is specified for enhancement. For example, hues from blue through red are assigned to the bands between the shortest and longest wavelengths of the multispectral image. In this method, the spectrum g_d is calculated by a spectrum estimation technique as in Eq. (8).
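Eq. (8) is not reproduced in this excerpt; a common estimation of a spectrum from a target tristimulus value, assumed here for illustration, is the minimum-norm (pseudo-inverse) solution of the linear map from spectra to XYZ under the viewing illuminant.

```python
import numpy as np

def estimate_spectrum(xyz, illuminant, cmf):
    """Estimate a spectrum g_d that reproduces the target XYZ value.
    xyz: (3,) target tristimulus; illuminant: (N,); cmf: (N, 3).
    Returns the minimum-norm (N,) spectrum (assumed stand-in for Eq. (8))."""
    M = cmf.T * illuminant           # (3, N) map from spectrum to XYZ
    return np.linalg.pinv(M) @ xyz   # minimum-norm spectrum with that XYZ
```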
Method II. In the second method, an arbitrary color or spectrum is specified based on the user's intent. The user chooses the color for visualization with a tool like a color picker, and the spectrum corresponding to the chosen color is estimated using Eq. (8). In this case, (X_d, Y_d, Z_d) is the tristimulus value transformed from the RGB vector of the color selected by the user. If the user desires the color or spectrum of a physical object as the enhanced result, the spectrum of the target object can be selected from a spectral image with a spectrum-picker tool.
Method III. Hue is a parameter in color spaces such as HSV, HLS, and CIE L*C*h, and the opposite hue in such color spaces corresponds to the perceptually inverse color. Using this property, the spectrum for visualization, g_d, can be determined from the hue distribution of an image. This method sets g_d automatically using the average hue of the image, and it can be effective when the hues of the pixels in the image are similar. The spectrum is calculated using the average values of L*, a*, and b* over the entire image, and the color with the opposite hue is given by Eq. (10).
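Eq. (10) is not reproduced in this excerpt; one reading consistent with the text ("opposite hue", same lightness) is to negate the average a* and b*, which rotates the hue angle by 180° while preserving chroma. A sketch under that assumption:

```python
import numpy as np

def opposite_hue_lab(lab_pixels):
    """Assumed Method III color choice: average the image's L*a*b* values and
    invert the hue by negating a* and b* (180° hue rotation, chroma preserved).
    lab_pixels: (num_pixels, 3) -> (3,) complementary color in L*a*b*."""
    L, a, b = lab_pixels.mean(axis=0)
    return np.array([L, -a, -b])
```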
In the experiments, we applied the proposed color enhancement method to multispectral images of human skin captured by a filter-wheel multispectral camera, a pathological slide captured by a multispectral microscope, and a rice paddy captured by a hyperspectral imager mounted on a cargo crane.
3.1. Application to a skin image
In the application to the skin image, we used the image of a palm shown in Fig. 2 and enhanced the spectral features at several wavelengths, including the near-infrared, with Method I. The palm image was captured by a 16-band multispectral camera whose center wavelengths and bandwidths are listed in Table 1; the image size was 1000 × 750 pixels, reduced and trimmed from the original 2048 × 2048 pixel image. It has been reported that melanin, capillary vessels, and veins have spectral features at short, middle, and long wavelengths, respectively [Figs. 3(a), 3(b), and 3(c)]. For example, long-wavelength light penetrates relatively deeper and is affected by the absorption of deoxy-hemoglobin in the veins, so the shape of a vein is enhanced when the 11th band is enhanced. Figure 3(d) shows that a band of longer wavelength is not well enhanced by the conventional method because the sensitivity of the CMF at longer wavelengths is small. In this experiment, Method I explained in the previous section was applied; Fig. 4 illustrates the procedure. Each wavelength in the visible range was assigned to a hue between blue (h_start = 240°) and red (h_end = 0°) in L*C*h color space, and the spectrum of the selected hue was used as g_d. The spectrum corresponding to each hue was derived by the spectral estimation technique as follows: with lightness and chroma held fixed, the hue was changed at an interval of Δh, the L*, a*, and b* corresponding to each wavelength were calculated, and the spectrum was obtained via Eq. (8). In this step, the hue was sampled at 16° intervals, namely h = 240, 224, 208, ···, corresponding to bands 1, 2, 3, ···, 16 of the 16-band multispectral camera used in our experiment. The lightness and chroma were fixed at L* = 50 and C* = 80. In this experiment n = 2, 8, 11, 16, that is, we set h = 224°, 128°, 80°, 0°, respectively.
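The band-to-hue assignment above is a linear map from band index to hue angle; the small helper below (an illustrative sketch, not the paper's code) reproduces the stated values for the 16-band camera.

```python
def band_to_hue(n, num_bands=16, h_start=240.0, h_end=0.0):
    """Map a 1-based band index to a hue angle in L*C*h color space,
    linearly from h_start (blue, band 1) to h_end (red, last band)."""
    return h_start + (h_end - h_start) * (n - 1) / (num_bands - 1)
```

For the 16-band case this gives steps of 16°, so bands n = 2, 8, 11, 16 map to h = 224°, 128°, 80°, 0°, matching the values used in the experiment.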
The spectrum for visualization, g d, was calculated from one of these hues depending on which band is enhanced, and the average vector of the entire image was used as the background color g a.
The results of enhancing the skin image with the proposed method are shown in Fig. 5. In the results, the spectral features at 445 nm, 545 nm, and 600 nm, which were also enhanced by the conventional method as shown in Fig. 3, are visualized. Additionally, the spectral feature at 710 nm, which was not visible with the conventional method, was successfully enhanced, and the structure of the vein is clearly observed. This result shows that the proposed method can visualize spectral features even in the invisible range. The artifacts on the edges of the fingers result from the motion of the object during image capture with the filter-wheel multispectral camera.
Moreover, we evaluated these methods numerically by comparing the color differences between normal skin regions and vein regions in the original image and in the images enhanced by the conventional and proposed methods when the 16th band is enhanced. The average CIE L*a*b* color differences between the normal skin regions and the vein regions are shown in Table 2. In the conventional method, the color difference between the two regions is almost the same as in the original image, and the color differences arise mainly from luminance differences. In the proposed method, however, the color difference increases compared with that of the original image; in particular, Δa* changes greatly. This indicates that the proposed method can enhance the image more effectively.
Scribner et al. [13, 14], Vilaseca et al. [15], and Jacobson et al. [16] have discussed the visualization of spectral features in the invisible range, but their results were mostly pseudo-colored images. Our enhancement method keeps the natural color of the background in the image, which can make the enhanced features easier to interpret.
3.2. Application to a pathological image
In the application to the pathological image, we considered enhancing the fiber region in a 16-band H&E (Hematoxylin-Eosin) stained liver-tissue specimen image captured using a multispectral microscope with the spectral specifications shown in Table 3. The fiber region is hardly differentiated in the H&E stained image shown in Fig. 6(a); hence the MT (Masson-Trichrome) staining technique is normally used to see the fiber region, as shown in Fig. 6(b). It has been reported that spectral imaging provides information for discriminating the fiber region in an H&E stained image. In this experiment, we applied color enhancement to the H&E stained image to clearly visualize the fiber region, where the spectrum g_d for visualization was determined from the color of the MT stained fiber region according to Method II. The size of the images used in the experiment is 2048 × 2048 pixels. To generate the KL vectors for enhancement, 400 spectral transmittance samples, each the pooled average of the pixel transmittances within a 5 × 5 pixel ROI, were obtained from the different tissue components, such as nuclei, cytoplasm, and red blood cells, excluding the fiber. The average of the spectral data was used as the background spectrum g_a. The spectrum of the fiber region in the MT stained specimen shown in Fig. 7(a) was employed as the spectrum g_d for visualization.
Here, the color enhancement method was implemented in spectral transmittance space to remove the non-uniformity of the illumination. Figure 7(b) shows the average residual component (g_j − s_j) for the different tissue components when six KL basis vectors are used. From Fig. 7(b), it is seen that the fiber region has a large residual at the 8th band, so we chose n = 8 as the band to be enhanced. The resulting color-enhanced images are shown in Fig. 8. Because of the shape of the H&E stained transmittance of fiber [Fig. 7(a)], the color hardly changes even if the spectral transmittance in the 8th band is amplified. While the enhanced features are not readily visible with the conventional method, the proposed method enhances the fiber region in the H&E stained image to blue, which is similar to its color in the MT stained image. Since the color is visualized similarly to that of the MT stained tissue specimen, it should be easier for pathologists to evaluate the result by comparing it with the conventional physical staining technique.
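The exact transmittance formula is not reproduced in this excerpt; a common form, assumed here for illustration, divides each pixel's raw signal by a per-band blank (no-specimen) reference, which cancels spatially varying illumination.

```python
import numpy as np

def to_transmittance(g, blank, eps=1e-12):
    """Assumed transmittance computation: per-band division by a blank
    (no-specimen) reference. g: (num_pixels, N) raw signals; blank: (N,)."""
    return g / np.maximum(blank, eps)   # eps guards against division by zero
```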
Method III was also applied to the same H&E stained pathological image. In Method III, the spectrum g_d for visualization is determined automatically based on the hue in CIE L*C*h color space. First, the average L*, a*, and b* are calculated from the average spectrum of the image. Then, as written in Eq. (10), the average a* and b* are transformed into values of the opposite hue, i.e., the complementary color. Finally, they are transformed into XYZ tristimulus values, and the spectrum g_d is derived by Eq. (8). In this method, the spectrum g_d has the hue opposite to the average hue of the entire image, regardless of the enhanced band n.
We again chose n = 8 as the band to be enhanced. The color g_d for visualization and the background color g_a were calculated automatically. The enhanced result is expected to improve when the lightness of the spectrum g_d is changed in cases where the lightness of the entire image is high, so an additional enhancement was also performed in which L* for the spectrum g_d was set to L* = 50. In the enhanced results shown in Fig. 9, the fiber regions are enhanced with a green color, which is the perceptual opposite of the average color of the H&E stained image. When the hues of all pixels in the image are similar, as in the present case, automatic color determination enables effective enhancement without the intricacy of manually selecting the spectrum for visualization.
Table 4 shows the average color differences between the cytoplasm and the fiber region, both of which are stained with eosin in an H&E stained image. In this table, Method III (a) and (b) correspond to the results shown in Figs. 9(a) and 9(b), respectively. Both proposed methods resulted in larger color differences than the conventional method, which indicates their effectiveness.
3.3. Application to a hyperspectral image
In the conventional method, when an image has a large number of bands, as hyperspectral images do, the amplified value in the enhanced band is not clearly visualized because the impact of amplifying a single band is small. The proposed methods, however, are effective on such images. In our experiment, we applied Method II to the hyperspectral image of a rice paddy shown in Fig. 10 and explored spectral features by observing the enhanced results. The image was obtained using a cargo crane carrying a hyperspectral sensor (ImSpector V10, Specim), which has 121 bands over 400–1000 nm with 3 nm spectral resolution and a 5 nm sampling interval. The bands at wavelengths longer than 900 nm were not used because they contain considerable noise. Each pixel value in the image was transformed into spectral reflectance with reference to the pixel values of the standard white board in the same image.
The hyperspectral image in Fig. 10 mainly consists of crop, weed, and soil, and we investigated their spectral features using the color enhancement of Method II. The KL basis vectors were generated from a region of the image containing weed and soil. Because the region extracted for spectral samples consisted mainly of weeds, we assumed that one KL vector, the first, was sufficient to estimate the spectra of the weeds. The spectrum g_d is the spectrum of magenta obtained from a Macbeth Color Checker image captured by the same hyperspectral camera. Under these conditions, we enhanced the rice paddy image in the bands from 500 to 900 nm at a 50 nm sampling interval.
The enhanced hyperspectral images are shown in Figs. 11 and 12. In Fig. 11(b), the weed region and part of the crop region are enhanced with a magenta color. Because only one KL basis vector was used and the spectra of the weed region vary widely, the weed region, which was used for generating the basis vectors, was also enhanced. The average residual components of the different regions in the rice paddy image are shown in Fig. 13, where "Crop 1" represents the residual of the crop region that is not enhanced in Fig. 11(b) and "Crop 2" represents the enhanced crop region. The spectral variations in the crop regions are mainly due to differences in their illumination conditions, such as shading. As shown in Fig. 13, the soil region has a large residual component around 700 nm, and the crop region has a large residual around 800 nm, which could correspond to the biomass content [18]. The spectral features enhanced between these wavelengths are shown in Figs. 11(e) and 12(b) as enhanced regions. Figure 14 shows a magnified part of the rice paddy image whose spectral features at 700 nm and 725 nm were enhanced. The contrast between the leaves and the crops is better in Fig. 14(b), owing to the negative residual component of the crop region at 725 nm (Fig. 13). The original spectral data of each region are shown in Fig. 15. Here we see that the spectra of the crop and weed regions differ greatly in the 680–750 nm range, the so-called "red edge" originating from the spectral feature of chlorophyll, which is not observed in the soil region. Furthermore, the residual components in the crop regions are due to differences in their spectral shapes in the near-infrared region. As these results show, the salient spectral features in the hyperspectral image of the rice paddy were successfully visualized by the proposed color enhancement, and such features can be applied to discriminate each region.
Further investigation of the spectral features in hyperspectral images using the proposed enhancement method could lead to new indices for advanced vegetation analysis.
This paper proposed a method for the effective visualization of enhanced spectral features, in which the design of a weighting factor matrix is modified so that the enhanced feature appears with an arbitrary color. Several methods to determine the color for visualization were also presented. Even if an image has a salient spectral feature in the invisible wavelength range or has a large number of spectral bands, the spectral feature can still be enhanced and effectively visualized with the proposed method. The method will be useful for exploring the spectral features masked in multispectral or hyperspectral images.
The authors gratefully acknowledge Dr. Yukako Yagi of Harvard Medical School, Boston, MA, U.S.A., for helpful advice and discussion.
References and links
1. Z. Lee, K. L. Carder, C. D. Mobley, R. G. Steward, and J. S. Patch, “Hyperspectral remote sensing for shallow waters. I. A semianalytical model,” Appl. Opt. 37, 6329–6338 (1998). [CrossRef]
2. J. A. Gualtieri and R. F. Cromp, “Support vector machines for hyperspectral remote sensing classification,” Proc. SPIE 3584, 221–232 (1999). [CrossRef]
3. B.-C. Gao, M. J. Montes, Z. Ahmad, and C. O. Davis, “Atmospheric correction algorithm for hyperspectral remote sensing of ocean color from space,” Appl. Opt. 39, 887–896 (2000). [CrossRef]
4. M. Yamaguchi, T. Teraji, K. Ohsawa, T. Uchiyama, H. Motomura, Y. Murakami, and N. Ohyama, “Color image reproduction based on the multispectral and multiprimary imaging: Experimental evaluation,” Proc. SPIE 4663, 15–26 (2002). [CrossRef]
5. J. Y. Hardeberg, F. Schmitt, and H. Brettel, “Multispectral color image capture using a liquid crystal tunable filter,” Opt. Eng. 41, 2532–2548 (2002). [CrossRef]
6. A. R. Gillespie, A. B. Kahle, and R. E. Walker, “Color enhancement of highly correlated images. I. Decorrelation and HSI contrast stretches,” Remote Sens. Environ. 20, 209–235 (1986). [CrossRef]
7. J. Ward, V. Magnotta, N. C. Andreasen, W. Ooteman, P. Nopoulos, and R. Pierson, “Color enhancement of multispectral MR images: Improving the visualization of subcortical structures,” J. Comput. Assist. Tomogr. 25, 942–949 (2001). [CrossRef] [PubMed]
8. M. Mitsui, Y. Murakami, T. Obi, M. Yamaguchi, and N. Ohyama, “Color enhancement in multispectral image using the Karhunen-Loeve transform,” Opt. Rev. 12, 69–75 (2005). [CrossRef]
9. M. Yamaguchi, M. Mitsui, Y. Murakami, H. Fukuda, N. Ohyama, and Y. Kubota, “Multispectral color imaging for dermatology: application in inflammatory and immunologic diseases,” in Proceedings of 13th Color Imaging Conference (Society for Imaging Science and Technology/Society for Information Display, 2005), pp. 52–58.
10. Y. Murakami, T. Obi, M. Yamaguchi, N. Ohyama, and Y. Komiya, “Spectral reflectance estimation from multi-band image using color chart,” Opt. Commun. 188, 47–54 (2001). [CrossRef]
11. P. A. Bautista, T. Abe, M. Yamaguchi, and N. Ohyama, “Multispectral image enhancement for H&E stained pathological tissue specimens,” Proc. SPIE 6918, 691836 (2008). [CrossRef]
12. N. Kosaka, K. Uto, and Y. Kosugi, “ICA-aided mixed-pixel analysis of hyperspectral data in agricultural land,” IEEE Geosci. Remote Sens. Lett. 2, 220–224 (2005). [CrossRef]
13. D. Scribner, P. Warren, J. Schuler, M. Satyshur, and M. Kruer, “Infrared color vision: an approach to sensor fusion,” Opt. Photon. News 9, 27–32 (1998). [CrossRef]
14. D. Scribner, P. Warren, and J. Schuler, “Extending color vision methods to bands beyond the visible,” Machine Vision Appl. 11, 306–312 (2000). [CrossRef]
15. M. Vilaseca, J. Pujol, M. Arjona, and F. M. Martínez-Verdú, “Color visualization system for near-infrared multispectral images,” J. Imaging Sci. Technol. 49, 246–255 (2005).
16. N. P. Jacobson and M. R. Gupta, “Design goals and solutions for display of hyperspectral images,” IEEE Trans. Geosci. Remote Sens. 43, 2684–2692 (2005). [CrossRef]
17. P. A. Bautista, T. Abe, M. Yamaguchi, Y. Yagi, and N. Ohyama, “Digital staining for multispectral images of pathological tissue specimens based on combined classification of spectral transmittance,” Comput. Med. Imaging Graph. 29, 649–657 (2005). [CrossRef] [PubMed]
18. S. Itano, T. Akiyama, H. Ishida, T. Okubo, and N. Watanabe, “Spectral characteristics of aboveground biomass, plant coverage, and plant height in Italian Ryegrass (Lolium multiflorum L.) meadows,” Grassland Sci. 46, 1–9 (2000).