Abstract

In multispectral imaging, Wiener estimation is widely adopted for the reconstruction of spectral reflectance. We propose an improved reflectance reconstruction method that adaptively selects training samples for the autocorrelation matrix calculation in Wiener estimation, without prior knowledge of the spectral information of the samples being imaged. The performance of the proposed adaptive Wiener estimation and that of the traditional method are compared for different channel numbers and noise levels. Experimental results show that the proposed method outperforms the traditional method in terms of both spectral and colorimetric prediction errors when the imaging channel number is 7 or less. When the imaging system consists of 11 or more channels, the color accuracy of the proposed method is slightly better than or close to that of the traditional method.

©2007 Optical Society of America

1. Introduction

Recently, multispectral imaging has been widely studied for reconstructing the spectral information of the samples being imaged and for recovering the spectra of illuminants [1–10]. A multispectral imaging system usually employs more than three imaging channels, although there have also been attempts to recover spectral reflectance using traditional three-channel digital cameras or color scanners [11–13].

In the color imaging literature, several reflectance reconstruction techniques have been proposed, such as Wiener estimation [2–4, 8–10], the pseudoinverse method [1, 7, 11, 12], and finite-dimensional modeling [4, 11]. Wiener estimation is among the most widely adopted of these techniques; it involves three matrices, namely the spectral responsivity, the autocorrelation matrix of reflectance, and the imaging noise. Shimano proposed a method for reflectance recovery using Wiener estimation without prior knowledge of the spectral reflectance and system noise [2]. Murakami et al. proposed a nonlinear reflectance estimation method based on a Gaussian mixture distribution and Wiener estimation [8]. Shen et al. found that the color accuracy of reflectance reconstruction can be improved by combining three different techniques [4].

For a real imaging system, the behavior sometimes departs from the theoretical imaging process, and therefore the selection of training samples has an obvious impact on the color accuracy of reflectance reconstruction [8, 9, 12, 13]. To cope with this problem, Haneishi et al. proposed dividing the sensor space into several sub-blocks to improve Wiener estimation [9]. Recently, Shen and Xin introduced an optimized estimation method for spectral reflectance reconstruction [12]. Their method consists of three main steps: (1) select a subset of training samples whose responses are close to that of the test sample, (2) calculate the weighting of each training sample, and (3) calculate the transform matrix for the test sample. DiCarlo and Wandell also proposed a method to recover high-dimensional reflectance from low-dimensional responses by selecting training samples adaptively [13].

In this study, we propose a modified Wiener estimation method that reconstructs spectral reflectance without prior knowledge of the spectral characteristics of the samples being imaged. We investigate whether the accuracy of reflectance estimation can be improved when the training samples for calculating the reflectance statistics are adaptively selected and appropriately weighted. The novelty of the proposed method lies mainly in the manner of training sample selection and autocorrelation matrix construction. The performance of the proposed adaptive Wiener estimation and the traditional method is compared for different channel numbers and noise levels.

2. Formulation of multispectral imaging

In this study the multispectral imaging system consists of a monochrome digital camera and several narrowband filters, and the response of the camera is proportional to the intensity of light entering the sensor. Let l(λ) be the spectral power distribution of the imaging illuminant, r(λ) the spectral reflectance of the sample being imaged, f_c(λ) the spectral transmittance of the cth (1 ≤ c ≤ C) channel, and s(λ) the spectral sensitivity of the monochrome camera. The response v_c of the cth channel can then be represented as

$$v_c = \int l(\lambda)\, r(\lambda)\, f_c(\lambda)\, s(\lambda)\, d\lambda + b_c + n_c = \int m_c(\lambda)\, r(\lambda)\, d\lambda + b_c + n_c, \tag{1}$$

where b_c is the bias response caused by dark current, and n_c is the zero-mean imaging noise. As l(λ), f_c(λ), and s(λ) are unknown a priori, they are merged into a single term m_c(λ), which is referred to as the spectral responsivity. In practical computation, the continuous functions can be replaced by their sampled counterparts so that the integral becomes a summation. If N uniformly spaced samples are used over the visible spectrum, Eq. (1) can be written in vector and matrix notation as

$$v = Mr + b + n, \tag{2}$$

where v is the C×1 response vector, M is the C×N spectral responsivity matrix, r is the N×1 reflectance vector, b is the C×1 dark-current response vector, and n is the C×1 imaging noise vector. In the mathematical recovery of the spectral responsivity M, two constraints, non-negativity and smoothness, are usually imposed [12, 14]. As the filters used in this study are narrowband, only the non-negativity constraint on M is needed:

$$m_{ij} \ge 0, \tag{3}$$

where 1 ≤ i ≤ C and 1 ≤ j ≤ N. For the detailed solution of M, interested readers are referred to Ref. 14. If we let u = v − b, Eq. (2) becomes

$$u = Mr + n. \tag{4}$$
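As a concrete illustration, the discretized imaging model of Eqs. (2) and (4) can be sketched in a few lines of NumPy. The channel count, responsivities, bias, and noise level below are hypothetical placeholders, not the calibrated values of the paper's imaging system:

```python
import numpy as np

rng = np.random.default_rng(0)

N, C = 31, 7                   # wavelength samples and imaging channels (placeholders)
M = rng.random((C, N))         # hypothetical spectral responsivity matrix (C x N)
r = rng.random(N)              # hypothetical reflectance of the sample (N x 1)
b = np.full(C, 0.01)           # dark-current bias b
n = rng.normal(0.0, 1e-3, C)   # zero-mean imaging noise n

v = M @ r + b + n              # Eq. (2): v = Mr + b + n
u = v - b                      # Eq. (4): bias-corrected response, u = Mr + n
```

Subtracting the dark-current bias leaves only the signal term and the zero-mean noise, which is the form used in the estimation below.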

3. Wiener estimation

Reflectance estimation seeks an N×C matrix W that transforms the response u into the estimated reflectance r̂,

$$\hat{r} = Wu, \tag{5}$$

such that the mean square error between the actual and predicted reflectance is minimized. In Wiener estimation, the transform matrix is

$$W_{\mathrm{WE}} = K_r M^T (M K_r M^T + K_n)^{-1}, \tag{6}$$

where K_r and K_n are the autocorrelation matrices of the reflectance and the noise, respectively:

$$K_r = E\{r r^T\}, \tag{7}$$
$$K_n = \mathrm{diag}\{\sigma_1^2, \sigma_2^2, \ldots, \sigma_C^2\}. \tag{8}$$

In this study the noise in different channels is assumed to be independent, and hence K_n is a diagonal matrix. For an actual imaging system, the noise variance of the cth channel is estimated as

$$\hat{\sigma}_c^2 = E\{\| u_c - m_c r \|^2\}, \tag{9}$$

where ∥x∥ denotes the Frobenius norm of x, u_c is the response of the cth channel, and m_c is the spectral responsivity of the cth channel (the cth row of M).
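The traditional Wiener estimation of Eqs. (5)–(8) can be sketched as below; a minimal sketch assuming hypothetical random training reflectances and responsivities in place of the measured ones:

```python
import numpy as np

def wiener_matrix(M, Kr, Kn):
    # Eq. (6): W_WE = Kr M^T (M Kr M^T + Kn)^(-1)
    return Kr @ M.T @ np.linalg.inv(M @ Kr @ M.T + Kn)

rng = np.random.default_rng(1)
N, C = 31, 7
train = rng.random((200, N))       # hypothetical training reflectances (rows)
Kr = train.T @ train / len(train)  # Eq. (7): sample estimate of E{r r^T}
Kn = np.diag(np.full(C, 1e-6))     # Eq. (8): diagonal noise autocorrelation
M = rng.random((C, N))             # hypothetical spectral responsivity

r = rng.random(N)
u = M @ r                          # noise-free response for illustration
r_hat = wiener_matrix(M, Kr, Kn) @ u   # Eq. (5): r_hat = W u
```

Note that W depends only on M, K_r, and K_n, so in the traditional method it is computed once for all pixels.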

4. The proposed method

From Eq. (6), it can be seen that the transform matrix W_WE in Wiener estimation is determined by three matrices: M, K_r, and K_n. M and K_n are fixed for a given imaging system, while K_r contains the correlation characteristics of the reflectances of the training samples. In traditional Wiener estimation, K_r is usually calculated from all the available training samples, as it is not easy to obtain the ensemble of the candidate (test) sample in some practical applications. However, it is reasonable to argue that, if the training samples used to calculate K_r are close to the candidate sample, the transform matrix W should carry characteristics that are more meaningful for the candidate sample. Based on this consideration, we propose a new method for reflectance reconstruction, which is referred to as adaptive Wiener estimation in this study.

Suppose the response of the candidate sample is u; its corresponding reflectance r̂ can then be calculated by the traditional Wiener estimation in Eq. (6). The training samples with reflectances r_i for calculating K_r can then be selected according to their spectral similarity to r̂. In the calculation of spectral similarity, each reflectance is normalized so that its summation equals 1. The reason for normalization is that the statistical information of a reflectance is mainly decided by its shape, not its magnitude. The spectral similarity consists of two terms, the mean spectral distance and the maximum spectral distance:

$$d_i = \alpha\, \mathrm{mean}\left\{ \left| \frac{r_i}{\sum r_i} - \frac{\hat{r}}{\sum \hat{r}} \right| \right\} + (1-\alpha)\, \max\left\{ \left| \frac{r_i}{\sum r_i} - \frac{\hat{r}}{\sum \hat{r}} \right| \right\}, \tag{10}$$

where α is a scaling factor and |x| denotes the element-wise absolute value of the vector x. In this study, we let α = 0.5. For two reflectances with similar shapes, neither the mean nor the maximum distance should be large. When r_i is very similar to r̂, d_i is close to 0.
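The similarity measure of Eq. (10) can be sketched as follows; the function name and the example spectra are illustrative only:

```python
import numpy as np

def spectral_similarity(r_i, r_hat, alpha=0.5):
    # Eq. (10): spectra are first normalized to unit sum, so the
    # distance reflects shape rather than magnitude.
    p = r_i / r_i.sum()
    q = r_hat / r_hat.sum()
    diff = np.abs(p - q)
    return alpha * diff.mean() + (1 - alpha) * diff.max()
```

Because of the normalization, scaling a spectrum by a constant leaves d_i essentially unchanged, and two identical shapes give d_i = 0.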

We select L training samples according to their spectral similarities and sort the r_i (1 ≤ i ≤ L) in ascending order, i.e., d_1 ≤ d_2 ≤ … ≤ d_L. Among the selected L training samples, it is reasonable to assume that the spectral characteristics of the more similar r_i should be closer to those of the candidate sample. Therefore, the L training samples are not treated equally in the calculation of K_r; equivalently, the closer training samples should contribute more to the K_r calculation.

Let Ω be the set of training samples in which each r_i is repeated k_i times:

$$\Omega = \{ \underbrace{r_1, \ldots, r_1}_{k_1\ \text{times}}, \ldots, \underbrace{r_i, \ldots, r_i}_{k_i\ \text{times}}, \ldots, \underbrace{r_L}_{1\ \text{time}} \}, \tag{11}$$

where the repeat time ki for the ith (1 ≤iL) selected training sample is calculated as

$$k_i = \left\lfloor \left( \frac{d_L}{d_i} \right)^{\gamma} + 0.5 \right\rfloor, \tag{12}$$

where the operator ⌊x⌋ rounds x to the nearest integer towards zero, and γ is an exponential factor. In this study, we let γ = 1. As r_L is the training sample most dissimilar to r̂, k_L = 1.

In this way, the number of training samples in the set Ω is usually larger than L. The transform matrix of the adaptive Wiener estimation then becomes

$$W_{\mathrm{AWE}} = K_{r,\Omega} M^T (M K_{r,\Omega} M^T + K_n)^{-1}, \tag{13}$$

where K_{r,Ω} is the autocorrelation matrix of the reflectances in the training set Ω, and K_n is the same as in Eq. (8). By substituting Eq. (13) into Eq. (5), the spectral reflectance of the candidate sample with response u can be reconstructed.

It is noted that the matrix W_AWE needs to be calculated per pixel in a multispectral image, and the proposed method is thus more computationally expensive than traditional Wiener estimation. This is a common shortcoming of almost all adaptive methods. Nevertheless, the adaptive method is worthwhile when color accuracy is very important.
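The adaptive construction of the autocorrelation matrix (Eqs. (10)–(12)) might be sketched as below on hypothetical data. The small guard against d_i = 0 is our addition to avoid division by zero; everything else follows the steps above:

```python
import numpy as np

def adaptive_autocorrelation(train, r_hat, L=50, gamma=1.0, alpha=0.5):
    # Select the L training reflectances most similar to the initial
    # estimate r_hat (Eq. 10), repeat each k_i times (Eq. 12), and form
    # the autocorrelation matrix over the weighted set Omega (Eq. 11).
    P = train / train.sum(axis=1, keepdims=True)   # normalize rows to unit sum
    q = r_hat / r_hat.sum()
    diff = np.abs(P - q)
    d = alpha * diff.mean(axis=1) + (1 - alpha) * diff.max(axis=1)
    idx = np.argsort(d)[:L]                        # ascending: d_1 <= ... <= d_L
    d_sel = np.maximum(d[idx], 1e-12)              # guard against d_i = 0 (our addition)
    k = np.floor((d_sel[-1] / d_sel) ** gamma + 0.5).astype(int)  # Eq. (12); k_L = 1
    omega = np.repeat(train[idx], k, axis=0)       # Eq. (11): weighted training set
    return omega.T @ omega / len(omega)            # K_{r,Omega}
```

The returned K_{r,Ω} is then substituted into Eq. (13) in place of K_r; note that the normalization is used only for the similarity ranking, while the autocorrelation is computed from the original (unnormalized) reflectances.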

5. Experimental results

In the experiment, the multispectral imaging system included a QImaging Retiga-EXi monochrome digital camera and a liquid crystal tunable filter (LCTF) made by Cambridge Research and Instrumentation. The LCTF tunes the center wavelengths of the narrowband filters electronically. The color target was the GretagMacbeth ColorChecker DC (CDC), and the reflectances of the color patches on the CDC were measured using a GretagMacbeth 7000A spectrophotometer at an interval of 10 nm. The multispectral images of the CDC were acquired under an approximate D65 lighting condition, and the spatial non-uniformity of the light field was corrected using a white paper. In total, 198 color patches on the CDC were used for reflectance reconstruction, excluding the glossy ones and the duplicated darkest ones. The inherent characteristics of the imaging system, i.e., the spectral responsivity M and the noise variances in Eq. (9), were estimated using the CDC. Figure 1 shows the 31-channel spectral responsivity M. In the experiment of spectral reflectance estimation, the CDC served as the test target. The reflectances of 1269 Munsell color chips, obtained from the website http://spectral.joensuu.fi, were used as training samples. For consistency with the measured CDC reflectances, the reflectances of the Munsell chips were also re-sampled at a 10 nm interval. The whole collection of Munsell reflectances is shown in Fig. 2. It is noted that multispectral images of the Munsell chips are not needed in this study. The autocorrelation matrix K_r for the traditional Wiener estimation is calculated from all 1269 Munsell reflectances, while K_r for the proposed adaptive method is calculated from the subset of selected training reflectances specified in Eq. (11).

Fig. 1. The spectral responsivities of the 31 channels of the multispectral imaging system.

Fig. 2. Reflectances of the 1269 Munsell color chips.

To evaluate the color accuracy of the reflectance reconstruction methods in detail, the experiment was conducted on two data sets, one synthetic and one real. In the synthetic data part, the influence of different combinations of channel number C and noise level was investigated; in the real data part, only the influence of C was studied, as the noise level was determined by the actual imaging system. The channel number C takes the values 6, 7, 11, and 16. These values were chosen because, when the channel sequences are uniformly selected, both the 1st and the 31st channels can be included. Table 1 lists the corresponding channel sequences in each case.

Table 1. The corresponding channel sequences of different channel numbers C

The performance of the proposed method is compared with the traditional Wiener estimation and the optimization method [12]. The comparison between the proposed method and the traditional Wiener estimation was carried out on both synthetic and real data. As the optimization method requires the responses of training samples, its comparison with the proposed method was conducted only on the synthetic data.

The color accuracy of the reflectance reconstruction is evaluated in terms of both spectral and colorimetric errors. The spectral rms error between the actual reflectance r and its estimate r̂ is calculated as

$$\mathrm{rms} = \left( \frac{(r - \hat{r})^T (r - \hat{r})}{N} \right)^{1/2}. \tag{14}$$

The colorimetric error, denoted ΔE00, is calculated according to the CIEDE2000 color difference formula [15] under several standard illuminants such as D65 and F2.

5.1. Synthetic data

The objective of this part is to comprehensively evaluate the performance of the proposed method under various noise levels and channel numbers. The spectral responsivity M is assumed to be known a priori, and the response u_syn of the multispectral imaging system is synthesized as

$$u_{\mathrm{syn}} = Mr + n_\sigma, \tag{15}$$

where n_σ denotes additive Gaussian noise with zero mean and variance σ². The noise variance is related to the signal-to-noise ratio (SNR):

$$\mathrm{SNR} = 10 \log\left( \frac{\mathrm{Tr}(M K_r M^T)}{\sigma^2} \right), \tag{16}$$

where the term Tr(M K_r M^T) is the average signal power captured by the multispectral imaging system. In this experiment, we investigated the color accuracy of reflectance reconstruction under three SNRs: ∞, 50, and 40. SNR = ∞ means there is no noise in the imaging system. The noise variance σ² for a given SNR can be obtained by rearranging Eq. (16).
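Rearranging Eq. (16) for σ and synthesizing responses as in Eq. (15) can be sketched as follows; all data here are hypothetical placeholders for the measured quantities:

```python
import numpy as np

def sigma_from_snr(M, Kr, snr_db):
    # Rearranged Eq. (16): sigma^2 = Tr(M Kr M^T) / 10^(SNR/10)
    signal_power = np.trace(M @ Kr @ M.T)
    return float(np.sqrt(signal_power / 10 ** (snr_db / 10)))

rng = np.random.default_rng(4)
N, C = 31, 6
M = rng.random((C, N))                 # hypothetical spectral responsivity
train = rng.random((200, N))           # hypothetical training reflectances
Kr = train.T @ train / len(train)      # Eq. (7)

sigma = sigma_from_snr(M, Kr, 50)      # noise level corresponding to SNR = 50
u_syn = M @ rng.random(N) + rng.normal(0.0, sigma, C)   # Eq. (15)
```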

It is noted that the response u_syn in Eq. (15) is simulated for each candidate sample on the CDC chart. The responses of the Munsell chips are not simulated for the proposed adaptive Wiener estimation, as its selection of training samples is carried out in the reflectance space. For the optimization method, however, the responses of the Munsell samples are also synthesized, as its training sample selection is conducted in the response space [12].

Fig. 3. The distribution of average reflectance rms error (left) and average color difference error (right) with respect to the training sample number L. Channel number C=6 and SNR=50.

It is of interest to investigate the influence of the training sample number L on the accuracy of reflectance reconstruction. Figure 3 shows the distributions of the average spectral error and colorimetric error with respect to L when C=6 and SNR=50, indicating that L=50 is appropriate. In the experiment we found that the distribution trends differed when C and SNR changed, but the variation in color accuracy was not large. Therefore we simply use L=50 in the following experiments.

Figure 4 shows the selected training sets Ω for the given candidates. As expected, the shapes of the training samples are similar to those of the candidate samples.

Fig. 4. Two typical examples of the training sets for the given candidate samples. Channel number C=6 and SNR=50.

Table 2 compares the color accuracy of the proposed adaptive Wiener estimation with the traditional Wiener estimation and the optimization method. In the experiment, the three-dimensional response space of the original optimization method [12] is extended to 6, 7, 11, and 16 dimensions, respectively. It is found that the color errors of the optimization method are generally higher than those of Wiener estimation and the proposed method, especially when the SNR is low. This indicates that the optimization method does not perform well for the multispectral imaging system in this study. In the following, we compare the color accuracy of the proposed method and the Wiener estimation method in detail.

Table 2. Comparison of the spectral and colorimetric errors of the proposed adaptive Wiener estimation (AWE), the traditional Wiener estimation (WE), and the optimized method (Opt.) [12] when using synthetic data.

In the cases of 6 and 7 channels, the proposed method outperforms Wiener estimation in terms of both spectral rms error and color difference error, at a significance level of p=0.01. In the case of 11 channels, the spectral rms error of the proposed method is lower than that of Wiener estimation, but the color difference error is slightly larger when SNR=∞ and SNR=50. This apparent contradiction is due to the nonlinear transform between reflectance and CIELAB space, as well as the nonlinear operations involved in the color difference formula. In the case of 16 channels, the spectral errors of the proposed method are similar to or even slightly larger than those of Wiener estimation. The maximum spectral rms errors and color difference errors of the proposed method are usually larger than those of Wiener estimation for different channel numbers and noise levels. It is interesting to observe that in the cases of 11 and 16 channels, both the spectral and colorimetric errors are lower than those of Wiener estimation when SNR=40. As a whole, it can be concluded from Table 2 that the superiority of the proposed method is obvious when the channel response information is incomplete or degraded by noise, or more specifically, when the channel number is small or the SNR is low.

5.2. Real data

In the real data experiment, the actual responses of the imaging system were used. The color accuracies of the proposed method and the traditional Wiener estimation were investigated for different channel numbers. The noise variances were estimated according to Eq. (9). The spectral rms errors and color difference errors are listed in Table 3. It is observed that the proposed method outperforms Wiener estimation in the cases of 6 and 7 channels. In the cases of 11 and 16 channels, the color accuracies of the proposed method are close to those of Wiener estimation. This finding is consistent with that of the synthetic data experiment. Figure 5 shows typical reconstructed reflectances using the proposed method and Wiener estimation when the channel number C=7. It is found that the reflectances reconstructed by the proposed adaptive Wiener estimation are more accurate than those of the traditional Wiener estimation.

Table 3. Comparison of the spectral and colorimetric errors of the proposed adaptive Wiener estimation (AWE) and the traditional Wiener estimation (WE) when using real data.

Fig. 5. Reflectance reconstruction of the proposed adaptive Wiener estimation and traditional Wiener estimation when using real data. Channel number C=7.

6. Conclusion

This paper proposes an improved Wiener estimation method for the reconstruction of spectral reflectance by adaptive selection of training samples, without prior knowledge of the samples being imaged. The performance of the proposed adaptive Wiener estimation is investigated for different channel numbers and SNR levels. Experimental results indicate that the proposed method is significantly better than Wiener estimation when the channel number is not large (for example, 6 or 7), while it is slightly better than or close to traditional Wiener estimation when the channel number is 11 or more. The proposed method has potential applications in the textile, printing, and other industries.

Acknowledgment

This work was supported by the NSF of China under Grant No. 60602027 and the Hong Kong Research Institute of Textiles and Apparel (HKRITA) under Grant No. ITP/009/07TP.

References and links

1. J. Y. Hardeberg, “Acquisition and reproduction of color images: colorimetric and multispectral approaches,” Ph.D. dissertation (Ecole Nationale Superieure des Telecommunications, 1999).

2. N. Shimano, “Recovery of spectral reflectances of objects being imaged without prior knowledge,” IEEE Trans. Image Process. 15, 1848–1856 (2006).

3. Y. Murakami, T. Obi, M. Yamaguchi, N. Ohyama, and Y. Komiya, “Spectral reflectance estimation from multi-band image using color chart,” Opt. Commun. 188, 47–54 (2001).

4. H. L. Shen, J. H. Xin, and S. J. Shao, “Improved reflectance reconstruction for multispectral imaging by combining different techniques,” Opt. Express 15, 5531–5536 (2007).

5. M. Soriano, W. Oblefias, and C. Saloma, “Fluorescence spectrum estimation using multiple color images and minimum negativity constraint,” Opt. Express 10, 1458–1464 (2002).

6. M. A. López-Álvarez, J. Hernández-Andrés, E. M. Valero, and J. Romero, “Selecting algorithms, sensors, and linear bases for optimum spectral recovery of skylight,” J. Opt. Soc. Am. A 24, 942–956 (2007).

7. J. Y. Hardeberg, F. Schmitt, and H. Brettel, “Multispectral color image capture using liquid crystal tunable filter,” Opt. Eng. 41, 2532–2548 (2002).

8. Y. Murakami, T. Obi, M. Yamaguchi, and N. Ohyama, “Nonlinear estimation of spectral reflectance based on Gaussian mixture distribution for color image reproduction,” Appl. Opt. 41, 4840–4847 (2002).

9. H. Haneishi, T. Hasegawa, A. Hosoi, Y. Yokoyama, N. Tsumura, and Y. Miyake, “System design for accurately estimating the spectral reflectance of art paintings,” Appl. Opt. 39, 6621–6632 (2000).

10. N. Shimano, “Optimization of spectral sensitivities with Gaussian distribution functions for a color image acquisition device in the presence of noise,” Opt. Eng. 45, 013201 (2006).

11. V. Cheung, S. Westland, C. Li, J. Hardeberg, and D. Connah, “Characterization of trichromatic color cameras by using a new multispectral imaging technique,” J. Opt. Soc. Am. A 22, 1231–1240 (2005).

12. H. L. Shen and J. H. Xin, “Spectral characterization of a color scanner based on optimized adaptive estimation,” J. Opt. Soc. Am. A 23, 1566–1569 (2006).

13. J. M. DiCarlo and B. A. Wandell, “Spectral estimation theory: beyond linear but before Bayesian,” J. Opt. Soc. Am. A 20, 1261–1270 (2003).

14. K. Barnard and B. Funt, “Camera characterization for color research,” Color Res. Appl. 27, 152–163 (2002).

15. M. R. Luo, G. Cui, and B. Rigg, “The development of the CIE 2000 colour-difference formula: CIEDE2000,” Color Res. Appl. 26, 340–350 (2001).
