Abstract

This paper presents a prototype spectral video system based on hybrid-resolution spectral imaging. The system consists of a commercial three-channel color camera and a low-resolution spectral sensor that captures a 68-pixel spectral image in a single snap-shot. By combining the measurement data from both devices, the system produces high-resolution spectral image data frame by frame. The accuracy of the spectral data measured by the system is evaluated in selected regions of the image. As a result, it is confirmed that spectra can be measured with a normalized root mean squared error of around 10% or less. In addition, the capture of spectral video at 3 frames per second and real-time color reproduction at the same frame rate from the spectral video are demonstrated.

© 2014 Optical Society of America

1. Introduction

Spectral imaging techniques are utilized as an effective tool to measure and analyze targets noninvasively. In addition, the utilization of spectral images is expected to expand into fields of color image reproduction that require high fidelity [1–5]. Traditional devices for spectral image acquisition often involve data scanning along either the wavelength or a spatial axis, and their application to video capture is challenging. Although some techniques for spectral video acquisition in a time-sequential manner have been reported [6,7], snap-shot imaging is more convenient, where snap-shot means that a spectral image can be captured by a single-frame exposure [6, 8–13]. In order to utilize spectral imaging techniques in wider application fields including video, systems that are compact, easy to handle, and capable of snap-shot capture are desired.

There have been various attempts to realize snap-shot spectral imaging. Snap-shot spectral imaging was demonstrated based on Fourier transform spectroscopy [8] or by using a combination of multiple dichroic mirrors [9]. A six-band video system was developed consisting of two color video cameras and optics for color separation [10]. A system for low-cost snap-shot multispectral imaging was proposed based on an image duplicator and color filters [11]. Snap-shot spectral imaging was also developed based on specially fabricated optics called an image mapper [12]. The methods listed above are all promising for realizing snap-shot spectral imaging. However, they still involve a trade-off between the power of incident light and the spectral/spatial resolutions, because the light power is shared both spatially and spectrally.

Recently, snap-shot spectral imaging methods based on compressive sensing theory have been proposed [13–17]. These methods enable the reconstruction of a high-resolution image from a much smaller number of measurement data than required for the direct measurement of full spectral image data. However, the high computational cost of reconstructing full-resolution spectral images is one of the practical problems.

As another approach to overcome the resolution-throughput trade-off, a hybrid approach has been introduced in the field of remote sensing: several image data with different resolution properties are captured and merged by image fusion techniques [18]. A typical combination is high-resolution panchromatic image data and low-resolution multispectral image data, which are merged to produce high-resolution multispectral image data. Each measurement requires high resolution in only one of the spatial and spectral directions. Therefore, the implementation of the measurement systems becomes easier, without the spatial-spectral trade-off. Such a hybrid approach has recently been introduced to multispectral imaging in fields other than remote sensing [19–22]. In these studies, a combination of high-resolution three-channel color image data and low-resolution spectral or multi-band image data was used. In particular, spectral video capture was demonstrated in [21,22].

We have studied the same approach in a more general manner [23–26], which we call hybrid-resolution spectral imaging (HRSI). In this method, spatially high- but spectrally low-resolution data and spectrally high- but spatially low-resolution data are measured and combined to produce spectral image data of high resolution in both the spatial and spectral directions. In addition, several methods have been proposed to reconstruct spectral images from such a combination of data, and the effectiveness of the approach has been verified through experiments. However, there has been no report on spectral video systems based on the proposed HRSI. Thus, in this paper, we construct a prototype system to demonstrate spectral video capture, albeit at a frame rate of 3 frames per second (fps), to measure the accuracy of the system experimentally, and to clarify the remaining problems of the system.

The systems proposed in this paper and in [21,22] are based on the same concept: high-resolution spectral videos can be produced by combining two video streams with different resolution properties. The systems presented in [21,22] used a beam splitter to simultaneously capture high-resolution RGB and low-resolution spectral images. In the spectral imager, a fixed mask [21] or a content-adaptive mask [22] was used, and the reconstruction method was implemented based on a bilateral-interpolation filtering approach. That reconstruction method was specially designed for the spectral sensor, supposing that the pixel aperture size of the low-resolution spectral sensor was the same as that of the high-resolution RGB camera. Therefore, it is not suitable for cases where the pixel size of the spectral data is larger than that of the high-resolution RGB images, as in the case of our low-resolution spectral sensor (LRSS). On the other hand, the reconstruction theory employed in this paper is based on [23–26], which does not have this limitation and covers systems consisting of various types of measurement devices. In addition, the accuracy of the registration between the RGB camera and the LRSS is less critical in the reconstruction method used in this paper than in the bilateral-interpolation approach, because the spectral data from the LRSS are used only for deriving the spectral basis functions or the correlation matrix of the corresponding region, whereas the low-resolution spectral pixels and the RGB pixels must be precisely aligned in the interpolation-based approach. Therefore, it becomes possible to capture the two different-resolution images with two different imaging lens systems, and a simple and compact system can be realized. The method can thus also be applied to various high-resolution cameras in combination with the LRSS.

This paper reports a prototype HRSI system that realizes spectral video capture. The prototype system consists of a commercial color camera and a device called the low-resolution spectral sensor (LRSS) [26], which captures 68-pixel spectral images in a single snap-shot. In the remainder of this manuscript, we describe the principle of HRSI, the design and calibration of the LRSS, the configuration of the prototype system, and the experiments demonstrating the accuracy and feasibility of the prototype system.

2. Principle of hybrid-resolution spectral imaging

It is well known that spectral images include large redundancy both spectrally and spatially; therefore, spectral images can be highly compressed with small information loss. If we can directly capture compressed data of a spectral image, the original spectral image can be reconstructed from this small amount of measurement data by signal processing with small error. With such an approach, the amount of physical measurement can be reduced, and efficient spectral imaging becomes possible. The methods based on compressive sensing theory [13–17] build on this concept. We implement a similar concept in a different way, as HRSI.

The physical measurements of HRSI consist of spatially high- but spectrally low-resolution data and spectrally high- but spatially low-resolution data. A spatially and spectrally high-resolution spectral image is then reconstructed from the combination of these two different types of data. A schematic diagram of HRSI is shown in Fig. 1. In this figure, the former measurement is assumed to be high-spatial-resolution RGB image data, and the latter is called low-resolution spectral data for simplicity.

 

Fig. 1 Conceptual diagram of hybrid resolution spectral imaging.


We have proposed several methods to reconstruct high-resolution spectral images [23–25]. Let us explain the idea common to these methods. In the following explanation, it is assumed that the number of spectral samples for the high-resolution image data is M (M = 3 in the case of a conventional color camera). An important assumption in these reconstruction methods is that the spectra of a scene can be represented by a linear combination of R spectral basis functions:

$$f(i,j)=\sum_{r=1}^{R}\alpha_r(i,j)\,\varphi_r=\left[\varphi_1\ \cdots\ \varphi_R\right]\begin{bmatrix}\alpha_1(i,j)\\ \vdots\\ \alpha_R(i,j)\end{bmatrix}=\Phi\,\alpha(i,j),\tag{1}$$
where f(i,j) and φr are L-dimensional column vectors representing the spectral radiance function of a pixel (i,j) and a spectral basis function, respectively, αr(i,j) is a weighting coefficient, and L is the number of samples in wavelength. Obviously, an arbitrary scene does not satisfy this assumption for a fixed R. However, if we appropriately divide the image into small regions, the assumption can be satisfied in each small region for a small R. Let Φk be the L×R matrix whose columns are the spectral basis functions representing the spectra included in the kth small region.

If we can assume the linearity of the high-resolution camera system, the image signal g(i,j) (an M-dimensional column vector) is modeled by

$$g(i,j)=Hf(i,j),\tag{2}$$
where H is an M×L matrix representing the spectral sensitivity of the camera. If a pixel (i,j) is included in the kth region, substituting Eq. (1) into Eq. (2) yields
$$g(i,j)=[H\Phi_k]\,\alpha(i,j),\tag{3}$$
where HΦk is an M×R matrix. If M ≥ R and the rank of HΦk is equal to R, we can derive α(i,j) from g(i,j) by using the pseudo-inverse matrix of HΦk, denoted [HΦk]⁺. As a result, the spectral data can be reconstructed as
$$\hat{f}(i,j)=\Phi_k[H\Phi_k]^{+}g(i,j)=A_k\,g(i,j),\tag{4}$$
where Ak is an L×M matrix used for spectral reconstruction of the kth region. To realize this, we need to know H and Φk. The matrix H, the spectral sensitivity of the RGB camera, can be measured in advance. The matrix Φk, a set of spectral basis functions, can be derived from the low-resolution spectral measurements obtained in or around the kth small region.
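The reconstruction chain of Eqs. (1)-(4) amounts to a few lines of linear algebra. The NumPy sketch below uses random stand-ins for the basis Φk and the sensitivity H (not calibrated data) to verify that, when M ≥ R and the rank condition holds, the spectrum is recovered exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
L, M, R = 381, 3, 3          # wavelength samples, camera channels, basis functions

Phi_k = rng.random((L, R))   # spectral basis functions of the kth region (stand-in)
H = rng.random((M, L))       # camera spectral sensitivity (stand-in)

# True spectrum: a linear combination of the basis functions, Eq. (1)
alpha = rng.random(R)
f = Phi_k @ alpha

# Camera signal, Eq. (2)
g = H @ f

# Reconstruction matrix A_k = Phi_k [H Phi_k]^+, Eq. (4)
A_k = Phi_k @ np.linalg.pinv(H @ Phi_k)
f_hat = A_k @ g

# With M >= R and rank(H Phi_k) = R, the spectrum is recovered exactly
print(np.allclose(f_hat, f))  # True
```

With M = R = 3 the matrix HΦk is square and almost surely full rank for random data, so the pseudo-inverse reduces to an ordinary inverse and the reconstruction is exact.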

In actual reconstruction, it is important to consider the noise on the image signals. In addition, the number of spectral basis functions calculated from the low-resolution spectral data is not necessarily less than M. Therefore, instead of calculating Φk[HΦk]⁺ directly, it is better to calculate the spectral reconstruction matrices based on Wiener theory as

$$A_k=C_kH^{T}\left(HC_kH^{T}+C_{\mathrm{Noise}}\right)^{-1},\tag{5}$$
where Ck is an L×L correlation matrix calculated from the low-resolution spectral data measured in or around the kth region, and CNoise is an M×M (here 3×3) noise correlation matrix. It should be noted that Ak of Eq. (5) becomes equal to Φk[HΦk]⁺ when CNoise is a zero matrix and the rank of the matrix HCkHT is R; this is proved by using the relation Ck = ΦkDΦkT, where D is an R×R diagonal matrix with non-zero elements on its diagonal. In practice, the noise can often be considered uncorrelated between different channels. Moreover, if the noise levels in all channels are almost equal, CNoise can be represented as an identity matrix multiplied by a scalar factor. This factor is estimated from the noise variance, which is measured in advance. In the experiment presented in this paper, the factor was set to zero since the noise level was small enough.
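As a sanity check on the equivalence noted above, the following sketch builds Ck = ΦkDΦkT from random stand-ins and confirms numerically that the Wiener matrix of Eq. (5) with zero noise coincides with the pseudo-inverse form:

```python
import numpy as np

rng = np.random.default_rng(1)
L, M, R = 381, 3, 3

Phi_k = rng.random((L, R))   # basis functions (stand-in)
H = rng.random((M, L))       # camera sensitivity (stand-in)

# Correlation matrix from low-resolution spectra: C_k = Phi_k D Phi_k^T
D = np.diag(rng.random(R) + 0.1)           # R x R, non-zero diagonal
C_k = Phi_k @ D @ Phi_k.T

# Wiener reconstruction matrix, Eq. (5); C_noise = 0 here (low-noise case)
C_noise = np.zeros((M, M))
A_wiener = C_k @ H.T @ np.linalg.inv(H @ C_k @ H.T + C_noise)

# Equivalent pseudo-inverse form when noise is zero and the rank condition holds
A_pinv = Phi_k @ np.linalg.pinv(H @ Phi_k)
print(np.allclose(A_wiener, A_pinv))  # True
```

In practice CNoise would be the measured (scaled identity) noise correlation matrix rather than zero; the zero-noise case is chosen here only to exhibit the algebraic equivalence.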

The accuracy of the reconstruction depends on how the image is divided into small regions. The methods proposed so far adopt different ways to divide images [23–25]. One of the simplest methods does not divide the image at all and derives a single reconstruction matrix for the whole image; we call this Wiener estimation [27]. A more sophisticated approach is the piecewise Wiener estimation method [24], whose schematic diagram is shown in Fig. 2. In this method, a high-resolution RGB image is divided into blocks, and a reconstruction matrix is assigned to each block. The reconstruction matrix of each block is derived by Eq. (5), where the correlation matrix is calculated from the spectra weighted according to the distance d between the block center and the measurement position of each low-resolution spectral datum. The weight is determined by ρ^d, where ρ is a parameter that determines the dependence of the weight upon the distance d. The weight is large for spectral measurements near the block center, while spectral measurements distant from the block are effectively ignored in deriving the reconstruction matrix of the block. If ρ = 1, there is no spatial dependence and the method is equivalent to conventional Wiener estimation. In the original paper [24], the number of blocks is recommended to be nearly equal to the number of pixels of the low-resolution spectral data. To remove artifacts that can appear at the boundaries of the blocks, the blocks are defined so that they partially overlap with a Hanning window function.
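The distance weighting used when forming the correlation matrix of a block can be sketched as follows; the fiber positions, spectra, and the value of ρ below are synthetic examples, not the system's calibrated data:

```python
import numpy as np

rng = np.random.default_rng(2)
L = 381
rho = 0.9                                 # decay parameter (example value)

# Synthetic low-resolution measurements: positions and spectra of 68 fibers
positions = rng.random((68, 2)) * 720     # fiber positions in image coordinates
spectra = rng.random((68, L))             # one spectrum per fiber

def weighted_correlation(block_center, positions, spectra, rho):
    """Correlation matrix C_k with weight rho**d per measurement (input to Eq. 5)."""
    d = np.linalg.norm(positions - block_center, axis=1)
    w = rho ** d                          # large near the block center
    w = w / w.sum()                       # normalize the weights
    return (spectra.T * w) @ spectra      # sum_p w_p f_p f_p^T

C_k = weighted_correlation(np.array([320.0, 360.0]), positions, spectra, rho)
print(C_k.shape)  # (381, 381)
```

Measurements far from the block center receive exponentially small weight, so they contribute negligibly to Ck, matching the behavior described above.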

 

Fig. 2 Processing flow of piece-wise Wiener estimation method.


Although there are other methods that prepare reconstruction matrices per pixel [23] or per group of pixels with similar sets of RGB image signals [25], we implemented the Wiener estimation and piecewise Wiener estimation methods in the prototype system described in this manuscript.

3. Low-resolution spectral sensor

In order to realize a spectral video system based on the hybrid-resolution approach, we need a device that captures low-resolution spectral data in a short exposure time. For this purpose, we developed a device called the LRSS, which captures 68-pixel spectral images in a single exposure. Although the configuration of the LRSS is basically the same as that of a previously reported device for microscopes [28], the small camera head of the LRSS in this report was redesigned to be appropriate for macro imaging.

The LRSS was built by connecting the following components along a single optical axis: an imaging lens, an optical fiber array that converts a two-dimensional arrangement to a one-dimensional one, a direct-sight imaging spectrograph (ImSpector; Specim), and a monochrome CCD camera (GS2-FW14S5M; Point Grey Research, Inc.) (Fig. 3). One end of the fiber bundle, in a two-dimensional arrangement, is positioned on the image plane of the imaging lens and samples the scene image at 68 positions. The other end of the fiber bundle is attached to the slit in a one-dimensional arrangement, and the imaging spectrograph produces the spectral image of the light incident from the slit, which is captured by the CCD camera. A single frame of the CCD output thus includes the spectral data of 68 positions in a scene. A photograph of the LRSS is shown in Fig. 4.

 

Fig. 3 Schematic diagram of LRSS.


 

Fig. 4 Photograph of LRSS.


In order to obtain calibrated spectral data from the CCD output of the LRSS, we have to know (i) the relationship between the vertical pixel position and the wavelength and (ii) the spectral sensitivity function of the whole system. Both must be known for each fiber. Relationship (i) was represented by a quadratic function, considering the barrel distortion of the imaging spectrograph; the parameters were determined for each line corresponding to each fiber using twelve emission lines of a fluorescent lamp. The spectral sensitivity (ii) was determined by comparing the measurement data of a tungsten lamp with those of a commercial spectrophotometer (SR-3; Topcon), where the relative spectral sensitivity functions of all fibers were assumed to be identical. The sensitivity ratio among the 68 fibers was measured using spatially uniform illumination.

Figures 5(a)-5(c) show the data obtained through the calibration procedure: (a) the relationship between pixel position and wavelength for fiber #1 as an example, (b) the common spectral sensitivity function for all fibers, and (c) the sensitivity ratio among the fibers. As shown in graph (a), the relationship between pixel position (x) and wavelength (y) was obtained as a quadratic function per fiber by curve fitting (in the case of fiber #1, y = 0.0001x² + 0.8245x + 393.44). Using these calibration data, we can obtain 68 calibrated spectra from a single frame of CCD output as follows [27]. First, non-calibrated spectral data are read out as 68 vertical lines. Second, the read-out data are transformed to spectral data using calibration data (a) per line. Finally, spectral sensitivity correction is performed using (b) and (c) per fiber, yielding 68 calibrated spectra.
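The calibration mapping can be illustrated with the fiber #1 coefficients quoted above; the pixel range and the resampling onto a 1 nm grid are illustrative assumptions, not values stated in the text:

```python
import numpy as np

# Quadratic pixel-to-wavelength mapping for fiber #1, using the fitted
# coefficients quoted in the text: y = 0.0001 x^2 + 0.8245 x + 393.44
def pixel_to_wavelength(x):
    return 0.0001 * x**2 + 0.8245 * x + 393.44

pixels = np.arange(0, 481)                 # example vertical pixel range
wavelengths = pixel_to_wavelength(pixels)  # monotonically increasing
print(round(wavelengths[0], 2), round(wavelengths[-1], 2))  # 393.44 812.24

# Resample a raw line profile onto a uniform 1 nm grid (400-780 nm),
# matching the 381 bands of the calibrated LRSS output
raw = np.random.default_rng(3).random(pixels.size)  # stand-in line profile
grid = np.arange(400, 781)
resampled = np.interp(grid, wavelengths, raw)
print(resampled.size)  # 381
```

After this resampling step, the per-fiber sensitivity corrections (b) and (c) would be applied as simple elementwise divisions.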

 

Fig. 5 Calibration data of LRSS: (a) relationship between pixel position and wavelength for fiber #1, (b) common spectral sensitivity for all fibers, and (c) sensitivity ratio among fibers.


The accuracy of the LRSS was evaluated by measuring the spectra of a light-emitting diode (LED) lamp and an artificial sunlight source (SOLAX; SERIC Ltd.). Figure 6 shows the comparison between the spectra measured by the LRSS and the SR-3 for the two light sources. The two measured spectra almost overlap. The normalized root mean squared error (NRMSE) was calculated over the 400-780 nm range; the NRMSE is 1.4% and 2.8% for the LED and SOLAX, respectively. From these results, it was confirmed that the LRSS accurately measures spectra. In addition, the spectral resolution was measured using a He-Ne laser light source. At the wavelength of 633 nm, the full width at half maximum was 3.73 nm. This resolution is attributable mainly to the imaging spectrograph specification rather than to the resolution of the CCD camera.
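The NRMSE metric used for these comparisons can be computed as below; note that the normalization convention (here, by the maximum of the reference spectrum) is our assumption, as the text does not state it explicitly:

```python
import numpy as np

def nrmse(measured, reference):
    """Root mean squared error normalized by the reference maximum.
    (The normalization convention is an assumption; the paper does not
    state it explicitly.)"""
    err = np.sqrt(np.mean((measured - reference) ** 2))
    return err / np.max(reference)

ref = np.array([1.0, 2.0, 4.0, 2.0])       # toy reference spectrum
meas = ref + 0.1                           # uniform offset of 0.1
print(f"{100 * nrmse(meas, ref):.1f}%")    # 2.5%
```

A uniform offset of 0.1 against a reference peaking at 4.0 gives an NRMSE of 0.1/4.0 = 2.5%, illustrating the scale of the reported 1.4% and 2.8% errors.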

 

Fig. 6 Accuracy evaluation of spectra measured by the LRSS compared with spectra measured by the spectrophotometer SR-3 for (a) LED lamp and (b) artificial sunlight.


4. Hybrid-resolution spectral imaging system

A HRSI system was constructed using a commercial color CCD camera (FL2G-13S2C; Point Grey Research, Inc.) and the LRSS. The resolution of the color camera is 1280 × 960 pixels and the bit depth is 12 bits. The focal length of the imaging lens is 16 mm. The color camera and the LRSS are arranged side by side to image the same scene. They are connected through IEEE 1394b, which transfers the data from the two cameras in synchronization, to a personal computer (PC) with an Intel Core i7-4770 CPU (3.40 GHz) and 16 GB RAM. Figure 7(a) shows a photograph of the experimental setup when a color chart is captured by the system.

 

Fig. 7 (a) Photograph of the prototype system capturing an image of a color chart. (b) Spectral sensitivity of the RGB color camera.


The two cameras have different sizes of field of view: the high-resolution RGB camera has a wider field of view than the LRSS, as shown in Fig. 8. Therefore, it is enough to roughly align the two cameras; the corresponding region is then located in the high-resolution RGB image by using a chart prepared for alignment. The corresponding region is approximately 640 × 720 pixels in the high-resolution RGB image. Although this registration procedure was done manually in the experiments shown in this paper, automatic registration would not be difficult using conventional methods such as template matching [29]. As the distance between the target objects and the cameras changes, the size of the corresponding region does not change, but its location does.
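A brute-force template-matching registration of the kind cited [29] can be sketched with normalized cross-correlation; the scene and template below are synthetic stand-ins, not images from the system:

```python
import numpy as np

def match_template(image, template):
    """Locate template in image by normalized cross-correlation (brute force)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y+th, x:x+tw]
            p = patch - patch.mean()
            denom = np.sqrt((p**2).sum() * (t**2).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

rng = np.random.default_rng(4)
scene = rng.random((40, 40))              # stand-in for the RGB frame
tpl = scene[12:22, 18:28].copy()          # known sub-region as the template
print(match_template(scene, tpl))  # (12, 18)
```

For production use, an FFT-based or library implementation would replace this O(N⁴) loop; the sketch only shows that the correspondence search is conceptually simple.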

 

Fig. 8 Example set of images captured by high-resolution RGB camera and LRSS.


In the prototype system, the two video streams are transferred to the PC and processed by software frame by frame. From the CCD output image of the LRSS, the 68 calibrated spectra are derived, and spectral image reconstruction is performed via the Wiener or piecewise Wiener estimation method. The derived spectral image has 640 × 720 spatial resolution and 381 spectral bands in the range of 400-780 nm with a 1 nm increment. The spectral resolution is thus equivalent to that of the LRSS, i.e., 3.73 nm.

For the display of spectral video, we implemented real-time image conversion that generates a gray-scale image of a selected wavelength, a true-color image under a certain illuminant, and a false-color image in which images of selected wavelength are assigned to RGB channels. Since they can be computed by a simple matrix operation and a one-dimensional look-up table as explained below, the computation cost is relatively small and a real-time implementation is not difficult.

A single-wavelength image is obtained by multiplying a delta-like 1×L vector Bl, in which only the single element corresponding to the selected wavelength equals one and all other elements are zero, with every pixel of the spectral image as

$$f_l(i,j)=B_lA\,g(i,j),\tag{6}$$
where f_l(i,j) is a scalar representing the spectral value of the corresponding pixel at wavelength l. Since the term BlA is a 1×3 vector, the actual calculation for reconstructing a single-wavelength image reduces to a single inner product of two three-dimensional vectors (BlA and g(i,j)) per pixel.
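Because BlA collapses to a single row of the reconstruction matrix, the whole single-wavelength image becomes one vectorized product. A sketch with random stand-ins for A and the RGB frame (sizes chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
L, M = 381, 3
A = rng.random((L, M))                    # reconstruction matrix (stand-in)
rgb = rng.random((480, 640, M))           # RGB image signals g(i,j) (stand-in)

wavelengths = np.arange(400, 781)         # 1 nm grid, 400-780 nm
l = np.searchsorted(wavelengths, 550)     # index of the 550 nm band

# B_l A collapses to row l of A, since B_l selects a single wavelength
row = A[l, :]                             # 1x3 vector B_l A
band_image = rgb @ row                    # f_l(i,j) for every pixel at once, Eq. (6)
print(band_image.shape)  # (480, 640)
```

The per-pixel cost is exactly the three multiply-adds described in the text, which is why real-time display is straightforward.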

In a similar manner, for color image reproduction based on spectral information, a 3×L matrix Bc is prepared instead of Bl. For true-color display, the three rows of Bc represent the color matching functions (CMFs) of human vision. To display the color under a different illuminant, the spectra of the image-capture and display environments are prepared in advance, and Bc is derived as

$$B_c=CE_RE_I^{-1},\tag{7}$$
where EI and ER are L×L diagonal matrices whose diagonal elements represent the illumination spectra of the image-capture and display environments, respectively, and the 3×L matrix C denotes the CMFs of human vision. The 3×3 matrix BcA is then applied to every pixel, followed by one-dimensional look-up tables for display gamma correction. If Bc is replaced with a matrix in which each row is a delta-like vector similar to Bl, false-color display can be implemented. In addition to color reproduction and false-color display, spectral enhancement [30] or other visualizations based on spectral video data can be implemented in a similar manner.
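Equation (7) and the subsequent 3×3 per-pixel conversion can be sketched as follows; the CMFs, illuminant spectra, and reconstruction matrix below are random stand-ins, not measured data:

```python
import numpy as np

rng = np.random.default_rng(6)
L = 381
C = rng.random((3, L))                    # color matching functions (stand-in)
E_I = np.diag(rng.random(L) + 0.1)        # capture illuminant spectrum (diagonal)
E_R = np.diag(rng.random(L) + 0.1)        # display/reproduction illuminant (diagonal)

# Eq. (7): B_c = C E_R E_I^{-1}
B_c = C @ E_R @ np.linalg.inv(E_I)

# The per-pixel conversion then folds into a single 3x3 matrix B_c A
A = rng.random((L, 3))                    # reconstruction matrix (stand-in)
conv = B_c @ A                            # 3x3, applied to every RGB pixel
print(conv.shape)  # (3, 3)

# Display gamma correction via a one-dimensional look-up table (gamma = 2.2)
lut = (np.linspace(0, 1, 256) ** (1 / 2.2) * 255).astype(np.uint8)
```

Since EI is diagonal, its inverse is an elementwise reciprocal, so deriving Bc costs little even for L = 381 bands.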

Instead of saving the reconstructed full spectral video stream, the system stores the two video streams from the RGB camera and the LRSS, because these two streams constitute compressed data of the full spectral video stream. In addition, by storing the original two video streams, it remains possible to apply other reconstruction methods afterward.

5. Experiments

The spectral accuracy of the HRSI system was evaluated using a color chart and flowers. In addition, spectral video reproduction was demonstrated by capturing a human hand. The spectral sensitivity of the RGB camera was calibrated in advance [Fig. 7(b)]. The artificial sunlight source (Fig. 6(b)) was used for illuminating the subjects.

5.1 Spectral accuracy evaluation

A color chart (shown in Fig. 7) and flowers were captured by the developed HRSI system for evaluating the spectral accuracy. Two color patches of the color chart were captured per frame, and spectral images for two different color combinations (green & yellow; blue & red) were acquired. The flower scene consisted of a red rose and yellow carnations.

The original measurement data from the RGB camera and the LRSS are shown in Fig. 9. We can see two dominant groups of spectra and their mixtures in the LRSS measurement of the color chart. For instance, in the graph of green & yellow, the group of upper curves comes from the yellow patch, the group of lower curves comes from the green patch, and the four intermediate curves are their mixtures. The mixtures were observed at the boundary of the two colors because the resolution is low and the pixel aperture size is large in the LRSS, as shown at the bottom left of Fig. 3. The rectangles on the color images indicate the regions used for the evaluation of spectral accuracy described below. The reconstruction of the spectral images was performed by Wiener estimation because the number of colors in a frame is around three or fewer in this experiment, and there is no need to switch reconstruction matrices according to location using piecewise Wiener estimation. In this case, the reconstruction is based on Eq. (5) with the region index k removed.

 

Fig. 9 Raw measurement data from the RGB camera (top) and the LRSS (bottom) for the green & yellow (left) and blue & red (middle) color patches of the color chart and the flowers (right). The rectangles on the color images indicate the areas for accuracy evaluation.


In the flower scene, the variation of colors was more complex, and we applied piecewise Wiener estimation. In this case, the high-resolution RGB image was divided into 9 × 9 small regions, namely k = 1, ..., 81, as recommended in [24]. When calculating the correlation matrix Ck, the weight explained in Section 2 was applied, where ρ was empirically determined in this experiment. Since ρ ≠ 0, all the spectral data from the LRSS were used in the calculation of every Ck. We did not need Φk and R, because Eq. (5) was used in the reconstruction. The spectral accuracy depends on ρ, and its optimal value also depends on the scene. Thus, a method to determine an appropriate ρ value is one of the future issues for this method.

After the spectral image was reconstructed, a color image under a standard illuminant was also generated using Eq. (7). In this case, the illuminant spectrum of the artificial sunlight lamp was measured by the LRSS using a standard white reference. The spectrum of the CIE (Commission Internationale de l'Eclairage) standard illuminant D65 was assumed for image reproduction. The resultant color image after gamma correction (gamma = 2.2) is shown in Fig. 10(a). A high-resolution color image under a different illuminant was successfully generated using the LRSS data. Figure 10(b) shows the spectra of the pixels along the vertical line between the triangle marks at the top and bottom of image (a); the vertical location is aligned to the image in (a). From the top, it shows the spectra of the white background, red flower petals (front and back faces), yellow flower petals, light-green sepals, and green stems and leaves at the bottom. Figure 10(c) illustrates the resolution of the LRSS. Though the comparison is not quantitative, it confirms at least that the reconstructed spectral image has substantially higher spatial resolution than the LRSS output, while there appears to be no difference in spectral resolution.

 

Fig. 10 (a) Color image under the CIE D65 illuminant calculated from the spectral image of the flowers. The spectral image was reconstructed by piecewise Wiener estimation using the spectral data obtained from the LRSS. (b) Reconstructed spectral radiance of the pixels along the vertical line between the triangle marks indicated at the top and bottom of image (a). (c) Spectral radiance of the nearest fiber of the LRSS at the pixels along the same vertical line as (b). In (b) and (c), the vertical axes indicate the corresponding pixel location in the vertical direction, and the horizontal axes designate the wavelength; the gray levels indicate the radiance values. A spatial resolution significantly higher in (b) than in (c) is obtained (vertical direction), while the spectral resolution is the same (horizontal direction).


For quantitative evaluation of the spectral accuracy, the averaged spectra in the small regions (black rectangles in Fig. 9) were calculated and compared with the spectra measured at the corresponding positions by the SR-3. As a reference, the spectra were also estimated from the RGB image data only. In this case, spectra were estimated by Wiener estimation, but the spectral correlation matrix was generated based on a Markov model [23] without using the LRSS data. Figure 11 shows the estimated and measured spectra in the case of the flowers. The spectra obtained by the proposed system almost overlap the spectra measured by the SR-3. The NRMSEs are shown in Table 1. The proposed system realized less than 10% NRMSE except for the blue color patch, while larger NRMSEs occurred in the estimation from the RGB image only. Negative power, caused by estimation error, can be seen in the right graph for the spectra estimated from the RGB image only. The reason the error becomes large for blue in the proposed system is thought to be that the reflectance is low and the influence of noise on the measurement data is correspondingly large. It should be noted that the NRMSE includes the difference caused by the difference in spectral resolution between the LRSS and the SR-3.

 

Fig. 11 Estimated and measured spectra of carnations (left) and rose (right).



Table 1. Normalized Root Mean Squared Error of Spectra

5.2 Spectral video capture and reproduction

The capture of a spectral video was demonstrated. The illuminance was approximately 700 lx and the subject was a human hand. In order to obtain adequate signal levels in the measurement data, the exposure time was around 250 ms for the LRSS. The color image calculated from the spectral image was displayed at approximately 3 fps. This frame rate is mainly restricted by the long exposure time of the LRSS, which is discussed later. Figure 12 shows the screen capture of the display when the human hand was measured. For the calculation of the spectral reflectance functions, the illumination spectrum is used, which must be measured in advance using the LRSS and a white reference. The graph of the spectral reflectance function shows a typical spectral characteristic of human skin reflectance, with the hemoglobin absorption at 540 nm and 575 nm. This supports the validity of the system, though quantitative evaluation is difficult for moving subjects such as hands. In the color reproduction video, the afterimage was noticeable because of the long exposure time of the LRSS; this can be resolved by introducing a higher-intensity illumination or a higher-sensitivity imaging sensor for the LRSS.

 

Fig. 12 Single-frame excerpt from the video of the screen capture of the display when a human hand is measured by the prototype hybrid-resolution spectral imaging system (Media 1). The right-hand window shows the color image reproduced from the spectral image data, the bottom-left window shows the LRSS measurement as a color image, and the top-left window shows the graph of the spectral reflectance functions at the positions selected by mouse click in the right-hand window. In this window, only the 400-700 nm range is shown.


6. Discussions

The experiments demonstrated the spectral accuracy of the prototype system and the feasibility of capturing and displaying spectral videos. However, the current system has several problems to be solved, which are discussed in this section.

The LRSS needs to acquire a sufficient number of spectral data for every color in the scene so that the spectral basis functions can be derived accurately. However, the pixel count of the LRSS was small in the prototype system (68 pixels). The accuracy would decrease if the object had larger color variation; this is why the objects used in the experiments had small color variation. In addition, if the target scene has spatially fine structures, mixed spectra may be observed by the LRSS. In such a case, multiple spectral data with different mixture ratios should be acquired to derive the spectral basis functions accurately. Therefore, increasing the pixel count of the LRSS is desirable for target scenes with more complex structures. It may also be valuable to apply spectral unmixing to the low-resolution spectral data for better accuracy [25].

The frame rate was relatively low and the afterimage in the reconstructed video was prominent in the demonstration presented in this paper. The signal-processing time was not a bottleneck under the experimental conditions (30 fps at 1.3 megapixels); the low frame rate was caused by the long exposure time required by the LRSS. To enable a shorter exposure time, an alternative image sensor with higher sensitivity is needed. Spatial binning can also be applied to improve the signal-to-noise ratio in low-intensity channels. It should then not be very difficult to reduce the exposure time to 1/10 of the current value and realize video-rate reproduction.
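Spatial binning trades resolution for signal: summing an n × n block multiplies the signal by n² while shot noise grows only by n, improving the signal-to-noise ratio by roughly a factor of n. A minimal sketch (illustrative only, not the sensor's on-chip binning):

```python
import numpy as np

def bin_pixels(image, factor):
    """Sum adjacent pixels in factor x factor blocks; any rows/columns
    that do not fill a complete block are cropped."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.sum(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
print(bin_pixels(img, 2))  # -> [[10. 18.] [42. 50.]]
```

On-chip binning before readout would additionally avoid accumulating read noise from each individual pixel, but even this software form illustrates the SNR-versus-resolution trade.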

Let us consider the spatial and spectral resolution of the spectral images obtained by the prototype system. As mentioned in Sec. 2, under the specific condition that the number of spectral basis functions of the spectra contained in a scene (or a limited region of a scene) is not larger than three, the spectral image data can be reconstructed completely by the HRSI system. In this case, the spectral resolution is equivalent to that of the high-spectral-resolution data obtained by the LRSS (381 spectral samples over 400–780 nm with 3.73 nm spectral resolution in the prototype system), and the spatial resolution corresponds to that of the high-resolution RGB image data (640 × 720 pixels in the prototype system). On the other hand, if this condition on the number of spectral basis functions is not satisfied, the reconstructed spectral image data contain some error. The error does not necessarily appear as a degradation of the spectral or spatial resolution, but the spatial resolution of the components corresponding to higher-order spectral basis functions is limited by that of the LRSS.
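The complete-reconstruction condition above can be illustrated concretely: when every spectrum in the scene lies in the span of three basis functions, the three RGB values at a pixel determine the basis coefficients through a 3 × 3 linear system. The following is only a schematic sketch with hypothetical variable names, not the system's actual reconstruction code:

```python
import numpy as np

def reconstruct_spectrum(rgb, sensitivities, basis):
    """Reconstruct a high-resolution spectrum from one RGB pixel,
    assuming the spectrum lies in the span of three basis functions.
    sensitivities: (3, n_wl) camera spectral sensitivities (assumed known).
    basis:         (3, n_wl) spectral basis functions from the LRSS."""
    # rgb = sensitivities @ spectrum and spectrum = basis.T @ a
    # combine into a 3x3 linear system for the coefficients a.
    a = np.linalg.solve(sensitivities @ basis.T, rgb)
    return basis.T @ a

# Round-trip check with well-conditioned toy data (381 wavelength samples).
rng = np.random.default_rng(1)
S = rng.random((3, 381))
B = rng.random((3, 381))
true = B.T @ np.array([0.2, 0.5, 0.3])
rec = reconstruct_spectrum(S @ true, S, B)
print(np.allclose(rec, true))  # -> True
```

When a fourth or higher-order basis component is present, this 3 × 3 system is underdetermined for it, which is why those components must fall back on the low spatial resolution of the LRSS, as stated above.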

7. Conclusions

This paper has presented a prototype HRSI video system that realizes spectral video capturing, processing, and visualization. Because a fiber array is used in the LRSS, the imaging lens can be separated from the spectrometer. The high-resolution RGB and low-resolution spectral images are captured with distinct lenses without a beam splitter, and therefore the set of the RGB camera and the lens head module of the LRSS is compact and lightweight.

In the experiments, it was confirmed that spectra can be measured with an NRMSE of around 10% or less. Experimental evaluation of spectral accuracy is difficult because the accuracy depends on the complexity of the target scene, and evaluation using a color chart alone is not sufficient. We therefore captured a target object containing several different colors, picked spectral data from the reconstructed image, and compared them with spectra measured directly from the same object. It was also visually confirmed that high spatial resolution was achieved in the reconstructed spectral image. In addition, spectral video visualization was demonstrated with a waving human hand.
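The NRMSE figure quoted above can be computed as in the following sketch. The normalization convention used here (the reference spectrum's peak value) is an assumption, since the paper's exact definition is not restated in this section:

```python
import numpy as np

def nrmse(measured, reference):
    """Root-mean-squared error between a reconstructed and a reference
    spectrum, normalized by the reference's peak value (one common
    convention; assumed here for illustration)."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((measured - reference) ** 2))
    return rmse / reference.max()

# A constant offset of 0.05 against a unit-peak reference gives ~5% NRMSE.
ref = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
print(nrmse(ref + 0.05, ref))  # -> ~0.05
```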

In principle, the method applied here is considered to be less sensitive to image registration and to the pixel aperture size of the LRSS than the bilateral-interpolation-based methods presented in [21, 22]. Although the sensor models in the bilateral interpolation methods differ from that of this paper, a systematic comparison is expected in the future by adapting those methods to the LRSS presented here.

There are some limitations in the current system. First, since the LRSS has only 68 pixels, high accuracy cannot be expected if the target object has a complex spatial and spectral distribution; increasing the pixel count of the LRSS is expected to enhance the applicability and feasibility of the proposed system. Second, the frame rate should be improved for a true real-time system. For this purpose, the exposure time should be reduced by using a higher-sensitivity camera and by modifying the optical arrangement of the LRSS. Because the number of pixels assigned to the spectral dimension in the LRSS is large, a pixel binning technique can also be applied. It should then be possible to shorten the exposure time to 1/10 of the current value and realize 30 fps. Finally, the spectral accuracy is limited by the calibration error in the spectral sensitivity of the RGB camera. Accurate measurement of the spectral sensitivity requires considerable effort, so a method for easier calibration of the RGB camera is another issue for future work.

Acknowledgments

The authors appreciate the great contribution of Ms. Asami Tanji to the experiments. The authors also deeply thank the anonymous reviewers for their insightful comments, which greatly improved the manuscript. This research was supported by Japan Society for the Promotion of Science KAKENHI (23135509, 25135712).

References and links

1. M. Hauta-Kasari, K. Miyazawa, S. Toyooka, and J. Parkkinen, “Spectral vision system for measuring color images,” J. Opt. Soc. Am. A 16(10), 2352–2362 (1999).

2. B. Hill, “Color capture, color management and the problem of metamerism,” Proc. SPIE 3963, 3–14 (2000).

3. H. Haneishi, T. Hasegawa, A. Hosoi, Y. Yokoyama, N. Tsumura, and Y. Miyake, “System design for accurately estimating the spectral reflectance of art paintings,” Appl. Opt. 39(35), 6621–6632 (2000).

4. J. Y. Hardeberg, F. Schmitt, and H. Brettel, “Multispectral color image capture using a liquid crystal tunable filter,” Opt. Eng. 41(10), 2532–2548 (2002).

5. M. Yamaguchi, H. Haneishi, and N. Ohyama, “Beyond red-green-blue (RGB): spectrum-based color imaging technology,” J. Imaging Sci. Technol. 52(1), 010201 (2008).

6. J. M. Eichenholz, N. Barnett, Y. Juang, D. Fish, S. Spano, E. Lindsley, and D. L. Farkas, “Real-time megapixel multispectral bioimaging,” Proc. SPIE 7568, 75681L (2010).

7. J. Park, M. Lee, M. Grossberg, and S. Nayar, “Multispectral imaging using multiplexed illumination,” in Proceedings of IEEE Conference on Computer Vision (Institute of Electrical and Electronics Engineers, New York, 2007), pp. 1–8.

8. A. Hirai, T. Inoue, K. Itoh, and Y. Ichioka, “Application of multiple-image Fourier transform spectral imaging to measurement of fast phenomena,” Opt. Rev. 1, 205–207 (1994).

9. A. Hirai, M. Hashimoto, K. Itoh, and Y. Ichioka, “Multichannel spectral imaging system for measurements with the highest signal-to-noise ratio,” Opt. Rev. 4(2), 334–341 (1997).

10. K. Ohsawa, T. Ajito, H. Fukuda, Y. Komiya, H. Haneishi, M. Yamaguchi, and N. Ohyama, “Six-band HDTV camera system for spectrum-based color reproduction,” J. Imaging Sci. Technol. 48, 85–92 (2004).

11. S. A. Mathews, “Design and fabrication of a low-cost, multispectral imaging system,” Appl. Opt. 47(28), F71–F76 (2008).

12. L. Gao, R. T. Kester, N. Hagen, and T. S. Tkaczyk, “Snapshot Image Mapping Spectrometer (IMS) with high sampling density for hyperspectral microscopy,” Opt. Express 18(14), 14330–14344 (2010).

13. A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47(10), B44–B51 (2008).

14. T. Sun and K. Kelly, “Compressive sensing hyperspectral imager,” in Frontiers in Optics 2009/Laser Science XXV/Fall 2009 OSA Optics & Photonics Technical Digest, OSA Technical Digest (CD) (Optical Society of America, 2009), CTuA5.

15. H. Arguello and G. R. Arce, “Coded aperture optimization for spectrally agile compressive imaging,” J. Opt. Soc. Am. A 28(11), 2400–2413 (2011).

16. Y. August, C. Vachman, Y. Rivenson, and A. Stern, “Compressive hyperspectral imaging by random separable projections in both the spatial and the spectral domains,” Appl. Opt. 52(10), D46–D54 (2013).

17. F. Soldevila, E. Irles, V. Durán, P. Clemente, M. Fernández-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113(4), 551–558 (2013).

18. P. S. Chavez, S. C. Sides, and J. A. Anderson, “Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic,” Photogramm. Eng. Remote Sens. 57, 295–303 (1991).

19. F. Imai and R. S. Berns, “High-resolution multi-spectral image archives: a hybrid approach,” in Proceedings of 6th Color Imaging Conference (Curran Associates, Inc., 1998), pp. 224–227.

20. R. Kawakami, J. Wright, Y. Tai, Y. Matsushita, M. Ben-Ezra, and K. Ikeuchi, “High-resolution hyperspectral imaging via matrix factorization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, New York, 2011), pp. 2329–2336.

21. X. Cao, X. Tong, Q. Dai, and S. Lin, “High resolution multispectral video capture with a hybrid camera system,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, New York, 2011), pp. 297–304.

22. C. Ma, X. Cao, R. Wu, and Q. Dai, “Content-adaptive high-resolution hyperspectral video acquisition with a hybrid camera system,” Opt. Lett. 39(4), 937–940 (2014).

23. Y. Murakami, K. Ietomi, M. Yamaguchi, and N. Ohyama, “Maximum a posteriori estimation of spectral reflectance from color image and multipoint spectral measurements,” Appl. Opt. 46(28), 7068–7082 (2007).

24. Y. Murakami, M. Yamaguchi, and N. Ohyama, “Piecewise Wiener estimation for reconstruction of spectral reflectance image by multipoint spectral measurements,” Appl. Opt. 48(11), 2188–2202 (2009).

25. Y. Murakami, M. Yamaguchi, and N. Ohyama, “Class-based spectral reconstruction based on unmixing of low-resolution spectral information,” J. Opt. Soc. Am. A 28(7), 1470–1481 (2011).

26. Y. Murakami, A. Tanji, and M. Yamaguchi, “Development of low-resolution spectral imager and its application to hybrid-resolution spectral imaging,” in Proceedings of 12th Congress of the International Colour Association (The Colour Group, GB, 2013), pp. 363–366.

27. W. K. Pratt and C. E. Mancill, “Spectral estimation techniques for the spectral calibration of a color image scanner,” Appl. Opt. 15(1), 73–75 (1976).

28. H. Matsuoka, Y. Kosai, M. Saito, N. Takeyama, and H. Suto, “Single-cell viability assessment with a novel spectro-imaging system,” J. Biotechnol. 94(3), 299–308 (2002).

29. T. Mahalakshmi, R. Muthaiah, and P. Swaminathan, “Review article: an overview of template matching technique in image processing,” Res. J. Appl. Sci. Eng. Technol. 4, 5469–5473 (2012).

30. N. Hashimoto, Y. Murakami, P. A. Bautista, M. Yamaguchi, T. Obi, N. Ohyama, K. Uto, and Y. Kosugi, “Multispectral image enhancement for effective visualization,” Opt. Express 19(10), 9315–9329 (2011).


Y. Murakami, A. Tanji, and M. Yamaguchi, “Development of low-resolution spectral imager and its application to hybrid-resolution spectral imaging,” in Proceedings of 12th Congress of the International Colour Association (The Colour Group, GB, 2013), pp. 363–366.

J. Park, M. Lee, M. Grossberg, and S. Nayar, “Multispectral imaging using multiplexed illumination,” in Proceedings of IEEE Conference on Computer Vision (Institute of Electrical and Electronics Engineers, New York, 2007), pp. 1–8.
[Crossref]

F. Imai and F. S. Berns, “High-resolution multi-spectral image archives: a hybrid approach,” in Proceedings of 6th Color Imaging Conference (Curran Associates, Inc., 1998), pp. 224–227.

R. Kawakami, J. Wright, Y. Tai, Y. Matsushita, M. Ben-Ezra, and K. Ikeuchi, “High-resolution hyperspectral imaging via matrix factorization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, New York, 2011), pp. 2329–2336.

X. Cao, X. Tong, Q. Dai, and S. Lin, “High resolution multispectral video capture with a hybrid camera system,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, New York, 2011), pp. 297–304.
[Crossref]

T. Sun, and K. Kelly, “Compressive sensing hyperspectral imager,” in Frontiers in Optics 2009/Laser Science XXV/Fall 2009 OSA Optics & Photonics Technical Digest, OSA Technical Digest (CD) (Optical Society of America, 2009), CTuA5.

Supplementary Material (1)

Media 1: MP4 (8434 KB)



Figures (12)

Fig. 1
Fig. 1 Conceptual diagram of hybrid resolution spectral imaging.
Fig. 2
Fig. 2 Processing flow of the piecewise Wiener estimation method.
Fig. 3
Fig. 3 Schematic diagram of LRSS.
Fig. 4
Fig. 4 Photograph of LRSS.
Fig. 5
Fig. 5 Calibration data of LRSS: (a) relationship between pixel position and wavelength for fiber #1, (b) common spectral sensitivity for all fibers, and (c) sensitivity ratio among fibers.
Fig. 6
Fig. 6 Accuracy evaluation of spectra measured by the LRSS, compared with the spectra measured by the spectrophotometer SR-3, for (a) an LED lamp and (b) artificial sunlight.
Fig. 7
Fig. 7 (a) Photograph of the prototype system capturing an image of a color chart. (b) Spectral sensitivity of the RGB color camera.
Fig. 8
Fig. 8 Example set of images captured by high-resolution RGB camera and LRSS.
Fig. 9
Fig. 9 Native measurement data from the RGB camera (top) and the LRSS (bottom) for the green and yellow (left) and blue and red (middle) color patches of the color chart, and for flowers (right). The rectangles on the color images indicate the areas used for accuracy evaluation.
Fig. 10
Fig. 10 (a) The color image under the CIE D65 illuminant calculated from the spectral image of flowers. The spectral image was reconstructed by the piecewise Wiener estimation using the spectral data obtained from the LRSS. (b) The reconstructed spectral radiance of the pixels along the vertical line between the triangle marks indicated at the top and bottom of image (a). (c) The spectral radiance measured by the nearest fiber of the LRSS at the pixels along the same vertical line as in (b). In (b) and (c), the vertical axes indicate the corresponding pixel location in the vertical direction, the horizontal axes designate the wavelength, and the gray levels indicate the radiance values. The spatial resolution obtained in (b) is significantly higher than that in (c) (vertical direction), while the spectral resolution is the same (horizontal direction).
Fig. 11
Fig. 11 Estimated and measured spectra of carnations (left) and rose (right).
Fig. 12
Fig. 12 Single-frame excerpt from a screen-capture video of the display while a human hand is measured by the prototype hybrid resolution spectral imaging system (Media 1). The right-hand window shows the color image reproduced from the spectral image data, the bottom-left window shows the LRSS measurement as a color image, and the top-left window shows a graph of the spectral reflectance at the position selected by mouse click in the right-hand window. In this window, only the 400–700 nm range is shown.

Tables (1)


Table 1 Normalized Root Mean Squared Error of Spectra
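Table 1 reports the normalized root mean squared error (NRMSE) between estimated and measured spectra. The exact normalization convention is defined in the body of the paper; as a minimal sketch, assuming the common convention of dividing the RMSE by the maximum value of the reference spectrum, the metric can be computed as:

```python
import numpy as np

def nrmse(estimated, reference):
    """RMSE between two spectra, normalized by the maximum of the
    reference spectrum (one common convention; an assumption here)."""
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((estimated - reference) ** 2))
    return rmse / np.max(reference)

# Example: a flat reference spectrum and an estimate with a constant 5% offset.
ref = np.ones(31)                 # e.g. 400-700 nm sampled at 10 nm intervals
est = ref + 0.05
print(round(nrmse(est, ref), 3))  # 0.05
```

Under this convention, the "around 10%" figure quoted in the abstract corresponds to an NRMSE value of about 0.1.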

Equations (7)


f(i,j) = \sum_{r=1}^{R} \alpha_r(i,j)\,\varphi_r = [\,\varphi_1 \;\cdots\; \varphi_R\,]\,[\,\alpha_1(i,j) \;\cdots\; \alpha_R(i,j)\,]^{T} = \Phi\,\alpha(i,j),

g(i,j) = H f(i,j),

g(i,j) = [\,H \Phi_k\,]\,\alpha(i,j),

\hat{f}(i,j) = \Phi_k\,[\,H \Phi_k\,]^{+}\, g(i,j) = A_k\, g(i,j),

A_k = C_k H^{T} \left( H C_k H^{T} + C_{\mathrm{Noise}} \right)^{-1},

f_l(i,j) = B_l A\, g(i,j),

B_c = C E_R E_I^{-1}.
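The estimation pipeline above (linear camera model g = Hf, class-wise Wiener matrix A_k applied to each pixel's camera response) can be sketched numerically. This is a minimal NumPy illustration, not the authors' implementation: the sizes (3 camera channels, 31 spectral samples, 3 basis spectra per class), the random H and Phi_k, and the correlation matrix C_k built from the basis set are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

n_bands, n_rgb, n_basis = 31, 3, 3      # illustrative sizes, not from the paper
H = rng.random((n_rgb, n_bands))        # camera spectral sensitivity matrix (hypothetical)
Phi_k = rng.random((n_bands, n_basis))  # basis spectra of class k (hypothetical)

# Assumed class correlation matrix and a small noise covariance.
C_k = Phi_k @ Phi_k.T
C_noise = 1e-8 * np.eye(n_rgb)

# Wiener estimation matrix: A_k = C_k H^T (H C_k H^T + C_noise)^(-1)
A_k = C_k @ H.T @ np.linalg.inv(H @ C_k @ H.T + C_noise)

# One pixel: a spectrum belonging to class k, its camera response, and the estimate.
f_true = Phi_k @ rng.random(n_basis)
g = H @ f_true                          # noiseless three-channel response
f_hat = A_k @ g                         # reconstructed 31-sample spectrum

print(f_hat.shape)                      # (31,)
# With as many basis spectra as camera channels and negligible noise, the
# reconstruction is near-exact for spectra lying inside the class.
```

In the full system, a different A_k is selected per pixel according to the class assigned from the low-resolution spectral data, which is what makes the estimation piecewise.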
