Abstract

A sequential weighted nonlinear regression technique is proposed for estimating spectral reflectance from digital camera responses. The method consists of two stages that successively take into account the colorimetric and spectral errors between the training set and the target set. Based on a polynomial expansion model, locally optimal training samples are adaptively employed to recover spectral reflectance as accurately as possible. The performance of the method is compared with several existing methods using simulated camera responses under three noise levels and practical camera responses under both self-test and cross-test conditions. Results show that the proposed method recovers spectral reflectance more accurately than the other methods considered.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In the past few years, there has been considerable interest in multispectral imaging for reconstructing the spectral information of samples. On the one hand, reflectance spectra, regarded as the 'fingerprint' of an object, express color more accurately than tristimulus information and avoid metamerism, so they have been used for color measurement and color quality control in the paint, plastic, ink, and textile industries. In addition, object analysis and visualization applications, such as cultural heritage [1] and medical diagnosis [2-4], also employ reflectance spectra. For example, as a noninvasive method, multispectral imaging provides reflectance spectra of each substance for pigment identification [1]; the spectral representation of skin color is important for the diagnosis of cutaneous diseases [2]; and vein visualization based on multispectral estimation can replace costly and time-consuming ultrasound imaging in various point-of-care applications, such as needle insertion for obese patients, children, and elderly people [4]. Since spectral data are essential for many applications, a methodology that recovers reflectance spectra accurately is clearly worthwhile.

There are a variety of spectral characterization techniques, such as Wiener estimation, pseudo-inverse estimation, finite-dimensional modeling, the Matrix R method, principal component analysis (PCA), independent component analysis (ICA), kernel methods, other linear models, and nonlinear models (polynomial models, tetrahedral models, and neural networks). Among these, Wiener estimation and finite-dimensional modeling require the camera sensitivities to be measured instrumentally or estimated. Spectral sensitivities of a camera are inconvenient to obtain directly with professional instruments, while estimating them mathematically by indirect methods not only increases the complexity of the algorithm but also introduces secondary propagation of errors. Compared with these two methods, the remaining methods, which recover spectral reflectance without any a priori knowledge about the imaging system, are more practical and have attracted wider attention. However, the classical methods based on pseudo-inverse estimation, Matrix R, principal component analysis, and independent component analysis are simple and straightforward but not very accurate. Consequently, various modifications of these traditional techniques have been proposed. Xiao et al. [5] obtained basis functions by principal component analysis and built a polynomial model mapping RGB values to reflectance basis weights to predict reflectance. Heikkinen [6] and Shen [7] proposed a general regularization framework and a partial least-squares regression method, respectively, for robust reflectance estimation. Heikkinen [8,9] and Eckhard [10] utilized different kernel-based regression models for reflectance estimation with relatively high accuracy. Amiri [11] found that the spectral and colorimetric errors of the recovery could be reduced via weighted nonlinear regression. All of the aforementioned methods are global methods. Recently, many methods have been improved by concentrating on local solutions. Zhang et al. [12] divided the spectral reflectance space into 11 subgroups and used the extended principal components of the corresponding subgroup samples to reconstruct spectral reflectance. Bianco [13] chose the metamer with the shape most similar to the available reflectance to recover reflectance spectra with the desired tristimulus values. Babaei [14] employed weighting matrices to improve pseudo-inverse estimation for reflectance reconstruction. Zhang [15] approximated the reflectance of a testing sample by a linear combination of the k reflectances from the training set whose camera responses are most similar to those of the testing sample. Liang [16,17] proposed a local-weighted nonlinear regression model based on camera responses and a local-weighted linear regression model based on raw camera responses for estimating spectral reflectance. These weighted regression methods are the most promising in terms of accuracy, but they select and weight training samples only by the color differences between testing and training samples and ignore their spectral differences. In view of this, to further reduce the colorimetric and spectral errors between the estimated and actual spectra, it is necessary to develop an optimized method that adaptively selects and weights training samples according to both colorimetric and spectral characteristics.

In this work, we propose a sequential weighted nonlinear regression technique for estimating spectral reflectance from digital raw camera responses. The method consists of two stages designed to enhance the performance of reflectance estimation as much as possible: colorimetric vector angles and spectral errors are used in succession to select and weight the training samples. The performance of the proposed method and several existing methods is compared using both a simulated camera system under different noise levels and a practical camera system under self-test and cross-test conditions. The experimental results show the superiority of our method over the others in terms of colorimetric and spectral accuracy.

2. Spectral imaging model

In the human visual system, an image is formed by light focused onto the retina, where three types of cones are mainly sensitive to long, middle, and short wavelengths, respectively [18]. Following the same basic principle, the color filters of a camera work like cones. The responses of a three-channel camera depend on the spectral power distribution of the light source $l(\lambda )$, the surface reflectance $r(\lambda )$, the camera sensitivity functions ${c_k}(\lambda )$, and the system noise ${n_k}$. The image value ${u_k}$ can be written as the simple imaging model of Eq. (1)

$${u_k} = \int_\omega ^{} {l(\lambda )} r(\lambda ){c_k}(\lambda )d\lambda + {n_k},k \in \{ R,G,B\} .$$
where $\omega $ denotes the visible spectrum. For simplicity, ${n_k} = 0$ is assumed. Equation (1) can be written as a matrix equation:
$${\textbf u} = {\textbf{Mr}}.$$
where ${\textbf u}$ denotes the response vector of three channels, ${\textbf M}$ represents the spectral responsivity including the spectral power distribution of light source and the camera spectral sensitivity, and ${\textbf r}$ denotes the spectral reflectance.

3. Proposed method

Some basic nomenclature is introduced before the proposed method is described in detail. Spectral reflectance reconstruction estimates the reflectance $\hat{{\textbf r}}$ by the following model:

$$\hat{{\textbf r}} = {\textbf{Qu}}.$$
where ${\textbf u}$ is the camera response vector of Eq. (2) and ${\textbf Q}$ is the transformation matrix. Once ${\textbf Q}$ is known, estimating the reflectance is straightforward. To obtain ${\textbf Q}$, training samples are used in a learning procedure. The reflectances and camera response values of the training samples obey the same relationship as Eq. (3), as given in Eq. (4):
$${\textbf{R = QU}}.$$
where ${\textbf R}$ denotes the spectral reflectance matrix of the training samples and ${\textbf U}$ denotes the camera response matrix of the training samples. By minimizing the least-squares error between the actual and estimated reflectances of the training samples, the transformation matrix can be solved through the pseudo-inverse technique, as in Eq. (5).
$${\textbf Q} = ({\textbf R}{{\textbf U}^T}){({\textbf U}{{\textbf U}^T})^{ - 1}}.$$
where the superscript ‘T’ indicates the matrix transpose, and the superscript ‘-1’ represents the matrix inverse.
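As a minimal NumPy sketch, the learning step of Eq. (5) and the estimation step of Eq. (3) can be written as follows; the data and dimensions here are random placeholders, not values from the paper, and `pinv` could be substituted for `inv` when ${\textbf U}{{\textbf U}^T}$ is ill-conditioned:

```python
import numpy as np

rng = np.random.default_rng(0)
n_waves, n_train = 31, 50           # 31 spectral samples, 50 training patches (illustrative)
R = rng.random((n_waves, n_train))  # spectral reflectance matrix, one column per sample
U = rng.random((3, n_train))        # camera response matrix, one RGB column per sample

# Eq. (5): Q = (R U^T)(U U^T)^-1
Q = (R @ U.T) @ np.linalg.inv(U @ U.T)

# Eq. (3): estimate the reflectance of one sample from its responses
r_hat = Q @ U[:, 0]                 # Q has shape (31, 3)
```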

3.1 Acquiring raw camera responses

Since raw-format images contain the unprocessed sensor data that truly correspond to the observational model in Eq. (1), raw images are used to estimate spectral reflectance. After capturing raw images, it is first necessary to convert them into easily read TIFF files via the open-source software dcraw or the conversion software provided by the camera's manufacturer. Then, three-channel RGB raw images can be generated by a de-mosaicing algorithm, for example, MATLAB's built-in de-mosaicing function.
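The de-mosaicing step can be illustrated with a simple superpixel scheme that collapses each 2×2 Bayer block into one RGB pixel. This is only a hypothetical stand-in for MATLAB's built-in function, which uses a more sophisticated interpolation; the function name and RGGB assumption are ours:

```python
import numpy as np

def demosaic_superpixel(bayer, pattern="RGGB"):
    """Collapse each 2x2 Bayer block into one RGB pixel (half resolution).
    Assumes an RGGB mosaic and averages the two green sites per block."""
    assert pattern == "RGGB"
    r  = bayer[0::2, 0::2]           # red sites
    g1 = bayer[0::2, 1::2]           # first green site of each block
    g2 = bayer[1::2, 0::2]           # second green site
    b  = bayer[1::2, 1::2]           # blue sites
    return np.dstack([r, (g1 + g2) / 2.0, b])

bayer = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 mosaic
rgb = demosaic_superpixel(bayer)                   # shape (2, 2, 3)
```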

3.2 Weighting training samples based on colorimetric vector angle

3.2.1 Colorimetric transformation

RGB signals generated by a camera are device-dependent and perceptually non-uniform; that is, the RGB values are not suitable for color evaluation, and it is necessary to transform the raw RGB values into CIE XYZ values. Therefore, the training samples are first used to derive a matrix that transforms the device-dependent camera RGB color space into the device-independent CIE XYZ color space. Based on the least-squares method, the 3×3 transformation matrix ${\textbf P}$ is determined by solving Eq. (6)

$${\textbf P} = ({{\textbf H}_{\textrm{train}}}{\textbf U}_{\textrm{train}}^T){({{\textbf U}_{\textrm{train}}}{\textbf U}_{\textrm{train}}^T)^{ - 1}}.$$
where ${{\textbf U}_{\textrm{train}}}$ is the normalized raw camera response matrix of the training samples; ${{\textbf H}_{\textrm{train}}}$ denotes the CIE XYZ matrix of the training samples; the superscript ‘T’ indicates the matrix transpose; and the superscript ‘-1’ represents the matrix inverse. Then, the CIE XYZ values of the target sample are predicted from its known normalized raw camera responses using the transformation matrix ${\textbf P}$, as shown in Eq. (7)
$${{\textbf x}_{\textrm{target}}} = {\textbf P}{{\textbf u}_{\textrm{target}}}.$$
where ${{\textbf x}_{\textrm{target}}}$ denotes the CIE XYZ vector of the target sample and ${{\textbf u}_{\textrm{target}}}$ denotes the normalized raw response vector of the target sample.
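Equations (6) and (7) can be sketched in NumPy as follows; the training responses and measured XYZ values here are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
U_train = rng.random((3, 100))   # normalized raw RGB of 100 training samples (columns)
H_train = rng.random((3, 100))   # measured CIE XYZ of the same samples (columns)

# Eq. (6): least-squares 3x3 transform from camera RGB to CIE XYZ
P = (H_train @ U_train.T) @ np.linalg.inv(U_train @ U_train.T)

# Eq. (7): predict the XYZ of a target sample from its normalized raw RGB
u_target = rng.random(3)
x_target = P @ u_target
```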

3.2.2 Calculating weighting matrix

Training samples similar to the target sample play a more important role in reflectance estimation. Selecting training samples and assigning appropriate weights according to color differences has proved to be an effective way to improve estimation accuracy [11,16,17]. However, in some cases the spectral shapes of a training sample and the target sample remain very similar even when their color difference is large. Therefore, the colorimetric vector angle appears to be a more practical and promising parameter for selecting and weighting training samples. This parameter treats the CIE XYZ values of the training and target samples as vectors: ${\textbf x}$ and ${\textbf y}$ store the CIE XYZ values of the target sample and a training sample, respectively. The colorimetric vector angle between the ith training sample and the target sample is calculated by Eq. (8)

$${\theta _i} = \arccos (\frac{{ < {\textbf x},{\textbf y} > }}{{||{\textbf x} ||||{\textbf y} ||}}),0 \le {\theta _i} \le \frac{\pi }{2},i \in \{ 1,2, \cdots ,N\textrm{\} }\textrm{.}$$
where N denotes the number of training samples. The colorimetric vector angle captures the difference between the two samples through the vector direction rather than the vector length: the smaller the angle, the higher the similarity of the two samples. The training samples are therefore sorted in ascending order of their colorimetric vector angles with the target sample, and the first p (1 ≤ p ≤ N) are selected as the locally optimal training samples. Since there is no pre-determined optimal value of p, it must be determined by a validation experiment. A weighting coefficient wj is then defined for each selected locally optimal training sample by Eq. (9)
$${w_j} = \frac{1}{{{\theta _j} + \mu }},j \in \{ 1,2, \cdots ,p\textrm{\} }\textrm{.}$$
where the subscript j refers to the jth locally optimal training sample; θj denotes the colorimetric vector angle between the jth locally optimal training sample and the target sample; and μ is a small constant that prevents division by zero when θj = 0 (in this work, μ = 0.0001). The weighting coefficients of the selected training samples are placed in descending order on the diagonal of a p×p matrix to form the weighting matrix:
$${\textbf W} = {\left[ {\begin{array}{cccc} {{w_1}}&0& \cdots &0\\ 0&{{w_2}}& \cdots &0\\ \vdots & \vdots & \ddots & \vdots \\ 0&0& \cdots &{{w_p}} \end{array}} \right]_{p \times p}}.$$
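The selection and weighting of Eqs. (8)-(10) can be sketched as follows; the function name and the data layout (one XYZ row per training sample) are our own choices:

```python
import numpy as np

def select_and_weight(x_target, X_train, p, mu=1e-4):
    """Stage 1: sort training samples by colorimetric vector angle to the
    target (Eq. 8), keep the p most similar, and weight them by Eq. (9)."""
    cos = (X_train @ x_target) / (
        np.linalg.norm(X_train, axis=1) * np.linalg.norm(x_target))
    theta = np.arccos(np.clip(cos, -1.0, 1.0))   # vector angles
    idx = np.argsort(theta)[:p]                  # indices of the p best samples
    W = np.diag(1.0 / (theta[idx] + mu))         # Eq. (9): descending weights
    return idx, W
```

Because the angles are sorted in ascending order before weighting, the diagonal of `W` is automatically in descending order, as Eq. (10) requires.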

3.3 Responses expansion

The estimation accuracy increases with the number of terms in the polynomial model [19]. Therefore, terms such as $rg$, $rb$, $gb$, ${r^2}$, ${g^2}$, ${b^2}$, $rgb$, ${r^2}g$, ${g^2}b$, $r{b^2}$, ${r^2}b$, $r{g^2}$, $g{b^2}$, and so on can be appended to the normalized raw camera response vector ${\textbf u}$ and matrix ${\textbf U}$. Together with a constant term 1, these high-order polynomial terms turn the linear regression into a nonlinear one. However, after the number of terms reaches a certain value, worse results or no significant improvement are observed [11,16]. Amiri [11] and Liang [16] used nonlinear polynomial regressions with 17, 18, and 20 terms for spectral reflectance estimation; in this work, a polynomial with 10 terms was found to give more accurate results. Therefore, the normalized raw responses are expanded as Eq. (11)

$${{\textbf u}_{\exp }} = {[{1\;\;r\;\;g\;\;b\;\;rg\;\;rb\;\;gb\;\;{r^2}\;\;{g^2}\;\;{b^2}} ]^T}.$$
where uexp is the 10×1 vector of the expanded normalized raw camera responses; r, g and b denote the normalized raw camera responses for R-, G- and B-channel of a pixel; and the superscript ‘T’ indicates the matrix transpose.
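A minimal sketch of the expansion in Eq. (11); the helper name is ours:

```python
import numpy as np

def expand_responses(u):
    """Eq. (11): expand a normalized raw RGB triple into the 10-term
    polynomial vector [1 r g b rg rb gb r^2 g^2 b^2]^T."""
    r, g, b = u
    return np.array([1.0, r, g, b, r * g, r * b, g * b, r**2, g**2, b**2])
```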

3.4 Estimating spectral reflectance of the target sample

Since all normalized raw camera responses are expanded, an adaptive transformation matrix ${{\textbf Q}_{\textrm{ada}}}$ can be constructed as Eq. (12)

$${{\textbf Q}_{\textrm{ada}}} = ({\tilde{{\textbf R}}_{\textrm{train}}}\tilde{{\textbf U}}_{\textrm{train,exp}}^T){({\tilde{{\textbf U}}_{\textrm{train, exp}}}\tilde{{\textbf U}}_{\textrm{train,exp}}^T)^{ - 1}}.$$
where ${\tilde{{\textbf R}}_{\textrm{train}}} = {\textbf W}{{\textbf R}_{\textrm{train}}}$ and ${\tilde{{\textbf U}}_{\textrm{train, exp}}} = {\textbf W}{{\textbf U}_{\textrm{train, exp}}}$; that is, ${\tilde{{\textbf R}}_{\textrm{train}}}$ and ${\tilde{{\textbf U}}_{\textrm{train, exp}}}$ are generated by applying the weighting matrix ${\textbf W}$ to ${{\textbf R}_{\textrm{train}}}$ (the spectral reflectance matrix of the local training samples) and ${{\textbf U}_{\textrm{train, exp}}}$ (the expanded normalized raw response matrix of the local training samples), respectively. The reflectance of the target sample can then be calculated via Eq. (13)
$${\hat{{\textbf r}}_{\textrm{target}}} = {{\textbf Q}_{\textrm{ada}}}{{\textbf u}_{\textrm{target, exp}}}.$$
where ${\hat{{\textbf r}}_{\textrm{target}}}$ denotes the estimated spectral reflectance of the target sample; ${{\textbf u}_{\textrm{target, exp}}}$ denotes the expanded normalized raw responses vector of the target sample.
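Putting Eqs. (12) and (13) together, a NumPy sketch might look as follows. With samples stored as columns (our choice), the diagonal weighting is applied by right-multiplication, and `pinv` is used for numerical safety; the test data below are synthetic:

```python
import numpy as np

def weighted_estimate(R_train, U_train_exp, W, u_target_exp):
    """Eqs. (12)-(13): weight the local training data (columns are samples),
    solve for the adaptive transform, and estimate the target reflectance."""
    R_w = R_train @ W                 # weighted reflectances of local samples
    U_w = U_train_exp @ W             # weighted expanded responses
    Q_ada = (R_w @ U_w.T) @ np.linalg.pinv(U_w @ U_w.T)   # Eq. (12)
    return Q_ada @ u_target_exp                           # Eq. (13)
```

When the training data exactly obey a linear map, the weighted estimate reproduces it regardless of the (positive) weights, which is a useful sanity check.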

3.5 Weighting training samples based on spectral error

Each training sample with reflectance ri can be weighted again according to its spectral similarity to the target reflectance derived in the previous step. To quantify spectral similarity, the root-mean-square error (RMSE) between the previously reconstructed reflectance of the target sample and that of the ith training sample is calculated by Eq. (14)

$$\textrm{RMS}{\textrm{E}_i} = \sqrt {\frac{1}{n}{{({{\hat{{\textbf r}}}_{\textrm{target}}} - {{\textbf r}_i})}^T}({{\hat{{\textbf r}}}_{\textrm{target}}} - {{\textbf r}_i})} ,i \in \{ 1,2, \cdots ,N\} .$$
where n represents the number of sampling points in the visible spectrum from 400 nm to 700 nm (n = 31 in this work). The training samples are sorted in ascending order of their root-mean-square errors with respect to the target sample, and the first q (1 ≤ q ≤ N) are selected as the locally optimal training samples (q is again determined by a validation experiment). Because spectral similarity is inversely related to RMSE, the weighting coefficient wk for each selected locally optimal training sample is calculated by Eq. (15) [20]
$${w_k} = \frac{1}{{\textrm{RMS}{\textrm{E}_k} + \varepsilon }},k \in \{ 1,2, \cdots ,q\} .$$
where the subscript k refers to the kth locally optimal training sample; RMSEk denotes the spectral difference between the kth locally optimal training sample and the target sample; and ε is a small constant that prevents division by zero when $\textrm{RMS}{\textrm{E}_k}$ = 0 (in this work, ε = 0.0001). The weighting coefficients of the q training samples are placed in a diagonal matrix to generate a new weighting matrix ${{\textbf W}^{\prime}}$.
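The second selection stage, Eqs. (14) and (15), can be sketched in the same style; the function name and column layout are our own:

```python
import numpy as np

def spectral_weights(r_hat, R_train, q, eps=1e-4):
    """Stage 2: re-rank all training samples by RMSE (Eq. 14) against the
    stage-1 estimate r_hat, keep the q best, and weight them by Eq. (15).
    R_train holds one reflectance per column."""
    diffs = R_train - r_hat[:, None]
    rmse = np.sqrt(np.mean(diffs**2, axis=0))    # Eq. (14), per sample
    idx = np.argsort(rmse)[:q]                   # q most similar spectra
    W2 = np.diag(1.0 / (rmse[idx] + eps))        # Eq. (15)
    return idx, W2
```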

3.6 Re-estimating spectral reflectance of the target sample

${\tilde{{\textbf R}}_{\textrm{train}}}$ and ${\tilde{{\textbf U}}_{\textrm{train, exp}}}$ in Eq. (12) are updated by multiplying the new ${{\textbf R}_{\textrm{train}}}$ and ${{\textbf U}_{\textrm{train,exp}}}$ by ${{\textbf W}^{\prime}}$. A new adaptive transformation matrix ${{\textbf Q}_{\textrm{ada}}}^{\prime}$ is then calculated via Eq. (12) using the new ${\tilde{{\textbf R}}_{\textrm{train}}}$ and ${\tilde{{\textbf U}}_{\textrm{train, exp}}}$.

Finally, the spectral reflectance of the target sample can be re-estimated by Eq. (16)

$${\hat{{\textbf r}}_{\textrm{target}}}^{\prime} = {{\textbf Q}_{\textrm{ada}}}^{\prime}{{\textbf u}_{\textrm{target, exp}}}.$$
where ${\hat{{\textbf r}}_{\textrm{target}}}^{\prime}$ denotes the re-estimated spectral reflectance of the target sample; ${{\textbf u}_{\textrm{target, exp}}}$ denotes the expanded normalized raw responses vector of the target sample.

4. Experiment

To demonstrate the performance of our method, experiments were carried out based on both simulated and real data.

4.1 Simulated experiments

In the simulated experiments, the RGB values of 1269 Munsell matte color chips were simulated with a Nikon D5100 camera and CIE standard illuminant D65 using the mathematical model of Eq. (1). Figures 1(a) and 1(b) show the spectral sensitivity of the camera and the spectral power distribution of the light source, ranging from 400 to 700 nm [21]. The reflectance data were measured by Hiltunen et al. [22] with a Perkin-Elmer Lambda 9 UV/VIS/NIR spectrophotometer in the range of 380 to 800 nm with 1-nm sampling; the reflectance from 400 to 700 nm was extracted at 10-nm intervals.

Fig. 1. (a) The spectral sensitivity of the camera, and (b) the spectral power distribution of the light source.

Additive, normally distributed noise of different levels was added to the three camera channels to simulate a real imaging system. The signal-to-noise ratio (SNR) is calculated by Eq. (17)

$$\textrm{SNR} = 10{\log _{10}}(\frac{{\textrm{Tr}({\textbf M}{{\textbf K}_r}{{\textbf M}^T})}}{{{\sigma ^2}}}).$$
where ${\textbf M}$ represents the spectral responsivity matrix of Eq. (2), the autocorrelation matrix ${{\textbf K}_r}$ is calculated from all Munsell matte color chips, and ${\sigma ^2}$ denotes the variance of the zero-mean Gaussian noise. The spectral estimation performance was studied under three noise levels: ∞, 50 and 30, where ∞ denotes that no noise is added to the ideal camera responses. The noise variance ${\sigma ^2}$ for the other two SNRs is obtained by rearranging Eq. (17).
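Rearranging Eq. (17) for the noise variance can be sketched as follows; the matrix sizes in the test are toy values, not the actual camera data:

```python
import numpy as np

def noise_variance(M, R, snr_db):
    """Rearrange Eq. (17): Gaussian noise variance for a target SNR in dB.
    M is the spectral responsivity matrix (channels x wavelengths) and R
    holds the reflectances of the chip set as columns."""
    K_r = (R @ R.T) / R.shape[1]       # autocorrelation matrix of the chip set
    signal = np.trace(M @ K_r @ M.T)
    return signal / 10.0**(snr_db / 10.0)
```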

The 1269 Munsell matte color chips were randomly divided into three groups: a training set of 635 samples and validation and testing sets of 317 samples each. The training-estimation cycle was repeated 50 times with randomly partitioned samples. For each trial, the transformation matrix was calculated from the training set. Since the number of locally optimal training samples influences the results in both stages of spectral estimation, the validation set was used to determine the two numbers of locally optimal training samples by minimizing the mean spectral error. The testing set was then used to evaluate the proposed method.

4.2 Practical experiments

In the practical experiments, the actual raw responses of the imaging system were used. Three color charts were adopted to verify the proposed method: the X-Rite ColorChecker SG (CCSG, 140 samples), an IT8.7-3 color chart (952 samples, printed on Fantanc UH180A digital printing paper by a Canon iPF8410 printer), and an Agfa IT8.7-2 color chart (288 samples). Figure 2 presents the color specifications of the samples in the CIE L*a*b* color space under the D65 illuminant and the 1964 standard observer. The samples are evenly distributed in color space.

Fig. 2. Color distribution of the samples in CIE L*a*b* color space: (a) color distribution of CCSG, (b) color distribution of IT8.7-3, and (c) color distribution of Agfa IT8.7-2.

The measurement devices were a Canon EOS 80D RGB camera and an X-Rite CI64 spectrophotometer with the Color iControl software. With a fixed focal length of 50 mm, the camera's f-number was set to F5.6, the ISO to 200, and the exposure time to 1/10 s. In a room lit by D65 fluorescent lamps, the white balance was set manually with a standard gray board and the three color charts were photographed. The raw 'CR2' files output by the camera were converted to 'tiff' files via the software dcraw and de-mosaiced using MATLAB's built-in de-mosaicing algorithm. The spectral data of the three color charts were then measured with the spectrophotometer and sampled from 400 to 700 nm in 10-nm increments. The raw camera responses were averaged over areas of about 40×40 pixels corresponding to the positions measured by the spectrophotometer with a 4-mm aperture.

The practical experiments were performed under self-test and cross-test conditions. In the self test, the CCSG color chips were used as the color targets, with half for training, half of the remainder for validation, and the rest for testing; 50 random partition trials were conducted. In the cross test, the whole IT8.7-3 and Agfa IT8.7-2 color charts were used as the training set and validation set, respectively, and the testing set was the same as in the self test.

4.3 Evaluation procedure

The spectral estimation accuracy on the target set was evaluated with several metrics. Magnitude and shape differences between the estimated and measured spectra were measured by the RMSE of Eq. (14), and the goodness-of-fit coefficient (GFC) of Eq. (18) was used as a complementary metric for spectral differences.

$$\textrm{GFC} = \frac{{|{{{\hat{{\textbf r}}}^{\prime T}}{\textbf r}} |}}{{\sqrt {{{\hat{{\textbf r}}}^{\prime T}}{{\hat{{\textbf r}}}^{\prime}}} \sqrt {{{\textbf r}^T}{\textbf r}} }}.$$
where ${\hat{{\textbf r}}^{\prime}}$ and ${\textbf r}$ denote the estimated spectral reflectance and the measured spectral reflectance, respectively.
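A one-line sketch of the GFC metric, which is simply the cosine similarity of the two spectra:

```python
import numpy as np

def gfc(r_hat, r):
    """Goodness-of-fit coefficient, Eq. (18): cosine similarity between the
    estimated and measured spectra (1.0 means identical shape)."""
    return abs(np.dot(r_hat, r)) / (np.linalg.norm(r_hat) * np.linalg.norm(r))
```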

Finally, assuming the CIE 1964 10° standard observer and CIE D65 standard illuminant, the CIE L*a*b* coordinates of the reflectance spectra were calculated. The CIE 1976 L*a*b* color difference of Eq. (19) was then used to quantify perceptual differences between the estimated and measured spectra:

$${\Delta \textrm{E}}_{\textrm{ab}}^{\ast } = \sqrt {{{(\Delta {L^\ast })}^2} + {{(\Delta {a^\ast })}^2} + {{(\Delta {b^\ast })}^2}} .$$
where $\Delta {L^\ast }$, $\Delta {a^\ast }$ and $\Delta {b^\ast }$ represent the lightness, red-green, and yellow-blue differences between the two spectra, respectively.
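And a sketch of Eq. (19), given two L*a*b* triplets:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference of Eq. (19) between two L*a*b* triplets."""
    d = np.asarray(lab1, dtype=float) - np.asarray(lab2, dtype=float)
    return float(np.sqrt(np.sum(d**2)))
```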

Additionally, the mean RMSE over the validation set was used to search for the numbers of locally optimal training samples.

5. Results and discussion

We have compared the spectral and colorimetric estimation results of our method and several existing methods: the regularized least squares (RLS) method [6], the regularized local linear model (RLLM) [15], the logarithmic kernel method [10], the PCA method proposed by Xiao [5], the weighted nonlinear regression (WNR) method used by Amiri [11], and the local-weighted linear regression (LLR) method proposed by Liang [17]. For the RLS, RLLM, logarithmic kernel, and LLR methods and the proposed method, all estimation errors were calculated with the optimal parameters found by minimizing the estimation error on the validation set.

For the simulated data of the Munsell matte color chips, the spectral errors and color differences of the estimated spectra were evaluated in terms of mean and maximum (Max) values. The results over 50 trials are summarized in Table 1, with the best result in each column in bold. As the table shows, the proposed method reduces the mean and maximum values of RMSE and ΔEab* and slightly increases the mean GFC between the actual and estimated spectra under all noise conditions. For all methods, the estimation quality is better at lower noise levels. The LLR method, which also uses local-weighted regression, comes closest to the proposed method at the lower noise levels, but performs slightly worse than some other existing methods at the highest noise level. Overall, the results at different noise levels show that our method improves both the accuracy and the stability of reflectance estimation.


Table 1. Comparison of performances of the proposed method and other existing methods for simulated data

Figures 3, 4, and 5 show the boxplot distributions of the spectral errors and color differences of the proposed method and the other estimation methods under different noise conditions. In each boxplot, the height of the blue rectangle defines the interquartile range (IQR); the bottom, red, and top lines of the box indicate the first quartile (25th percentile), the median (50th percentile), and the third quartile (75th percentile) of the error data, respectively. The bottom and top black lines indicate the 'minimum' (first quartile - 1.5·IQR, not the smallest value) and the 'maximum' (third quartile + 1.5·IQR, not the largest value), and the red '+' symbols mark outliers. Three observations follow from the boxplots: (1) the errors of the proposed method are more tightly grouped than those of the other methods, i.e., its error distribution across samples is relatively concentrated; (2) whether the noise level is low or high, both the spectral errors and the color differences of the proposed method are, on the whole, the smallest; (3) the lower the noise level, the more pronounced the superiority of the proposed method, so controlling camera noise is beneficial when using this method.

Fig. 3. Boxplots of experiment results based on simulated camera when SNR = ∞: (a) spectral error RMSE, and (b) color difference ΔEab*.

Fig. 4. Boxplots of experiment results based on simulated camera when SNR = 50: (a) spectral error RMSE, and (b) color difference ΔEab*.

Fig. 5. Boxplots of experiment results based on simulated camera when SNR = 30: (a) spectral error RMSE, and (b) color difference ΔEab*.

The results for the practical data, with different training and validation sets used to estimate CCSG, are summarized over 50 trials in Table 2, with the best result in each column in bold. In general, the proposed method yields the smallest RMSE and ΔEab* and the largest GFC, whether or not the training, validation, and testing samples come from the same color chart, which is consistent with the simulated experiments. However, an inconsistency between the best spectral accuracy and the best colorimetric accuracy can be found: in the self test, although the maximum spectral error of the proposed method is the lowest, the maximum color difference of the kernel method is slightly lower than that of the proposed method. Because different evaluation metrics have different emphases, the ΔEab* measure is not completely consistent with the RMSE measure, as noted in [6,8]. In addition, although the CCSG chart has significantly fewer chips than the IT8.7-3 chart, all methods perform better with the CCSG training set than with the IT8.7-3 training set, showing that the estimated spectral accuracy of all methods is also affected by the medium [5,19].


Table 2. Comparison of performances of the proposed method and other existing methods for practical data

Figures 6 and 7 illustrate the boxplot distributions of the spectral errors and color differences of the proposed method and the other estimation methods on the practical data. They show that the spectral errors and color differences of the proposed method are the smallest, so the practical experiments give results similar to the simulated ones. It is also worth noting that the superiority of the proposed method over the other methods is more pronounced in the practical experiments than in the simulated ones.

Fig. 6. Boxplots of practical experiment results in the self test: (a) spectral error RMSE, (b) color difference ΔEab*.

Fig. 7. Boxplots of practical experiment results in the cross test: (a) spectral error RMSE, and (b) color difference ΔEab*.

To evaluate whether the superiority of the proposed method is statistically significant, the Wilcoxon sign test (WST), a nonparametric test that does not require the error data to follow a particular probability distribution, was used to compare the whole error distribution of the proposed method with that of each other method [23]. All pairwise comparisons between the proposed method and the benchmark methods on the simulated and practical data, for both the RMSE and ΔEab* distributions, were made at a significance level of p = 0.05. Table 3 indicates (in bold) the pairwise comparisons whose whole-error-distribution differences are statistically significant. Under all imaging conditions, for RMSE the estimation accuracy of the proposed method is significantly better than that of every benchmark method; for ΔEab*, it is significantly better than most of them.

Table 3. Comparison of the whole error distributions of the proposed method and other existing methods

6. Conclusion

This work proposed a sequential adaptive method for estimating spectral reflectance from camera responses that adaptively selects and weights training samples twice. Simulated and practical experiments showed that the proposed method recovers spectral reflectance accurately, particularly when the training and validation sets share the same medium as the testing set. Its performance was compared with that of existing methods using a simulated camera under different SNR levels and a real camera. The results indicated that the proposed method performed best, with the minimum spectral errors and color differences. Hence, sequential adaptive estimation is effective for estimating spectral reflectance.

Funding

National Natural Science Foundation of China (61575090, 61775169).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. A. Pelagotti, A. D. Mastio, A. D. Rosa, and A. Piva, “Multispectral imaging of paintings,” IEEE Signal Process. Mag. 25(4), 27–36 (2008). [CrossRef]  

2. A. Leonardi, S. Buonaccorsi, V. Pellacchia, L. M. Moricca, E. Indrizzi, and G. Fini, “Maxillofacial prosthetic rehabilitation using extraoral implants,” J. Craniofac. Surg. 19(2), 398–405 (2008). [CrossRef]  

3. I. Nishidate, T. Maeda, K. Niizeki, and Y. Aizu, “Estimation of melanin and hemoglobin using spectral reflectance images reconstructed from a digital RGB image by the Wiener estimation method,” Sensors 13(6), 7902–7915 (2013). [CrossRef]  

4. J. H. Song, C. Kim, and Y. Yoo, “Vein visualization using a smart phone with multispectral Wiener estimation for point-of-care applications,” IEEE J. Biomed. Health Inform. 19(2), 773–778 (2015). [CrossRef]  

5. K. Xiao, Y. Zhu, C. Li, D. Connah, J. M. Yates, and S. Wuerger, “Improved method for skin reflectance reconstruction from camera images,” Opt. Express 24(13), 14934–14950 (2016). [CrossRef]  

6. V. Heikkinen, T. Jetsu, J. Parkkinen, M. Hauta-Kasari, T. Jaaskelainen, and S. D. Lee, “Regularized learning framework in the estimation of reflectance spectra from camera responses,” J. Opt. Soc. Am. A 24(9), 2673–2683 (2007). [CrossRef]  

7. H. Shen, H. Wan, and Z. Zhang, “Estimating reflectance from multispectral camera responses based on partial least-squares regression,” J. Electron. Imaging 19(2), 020501 (2010). [CrossRef]  

8. V. Heikkinen, A. Mirhashemi, and J. Alho, “Link functions and Matérn kernel in the estimation of reflectance spectra from RGB responses,” J. Opt. Soc. Am. A 30(11), 2444–2454 (2013). [CrossRef]  

9. V. Heikkinen, C. Cámara, T. Hirvonen, and J. Alho, “Spectral imaging using consumer-level devices and kernel-based regression,” J. Opt. Soc. Am. A 33(6), 1095–1110 (2016). [CrossRef]  

10. T. Eckhard, E. M. Valero, J. Hernández-Andrés, and V. Heikkinen, “Evaluating logarithmic kernel for spectral reflectance estimation—effects on model parametrization, training set size, and number of sensor spectral channels,” J. Opt. Soc. Am. A 31(3), 541–549 (2014). [CrossRef]  

11. M. M. Amiri and M. D. Fairchild, “A strategy toward spectral and colorimetric color reproduction using ordinary digital cameras,” Color Res. Appl. 43(5), 675–684 (2018). [CrossRef]  

12. X. Zhang and H. Xu, “Reconstructing spectral reflectance by dividing spectral space and extending the principal components in principal component analysis,” J. Opt. Soc. Am. A 25(2), 371–378 (2008). [CrossRef]  

13. S. Bianco, “Reflectance spectra recovery from tristimulus values by adaptive estimation with metameric shape correction,” J. Opt. Soc. Am. A 27(8), 1868 (2010). [CrossRef]  

14. V. Babaei, S. H. Amirshahi, and F. Agahian, “Using weighted pseudo-inverse method for reconstruction of reflectance spectra and analyzing the dataset in terms of normality,” Color Res. Appl. 36(4), 295–305 (2011). [CrossRef]  

15. W. Zhang, G. Tang, D. Dai, and A. Nehorai, “Estimation of reflectance from camera responses by the regularized local linear model,” Opt. Lett. 36(19), 3933–3935 (2011). [CrossRef]  

16. J. Liang and X. Wan, “Optimized method for spectral reflectance reconstruction from camera responses,” Opt. Express 25(23), 28273–28287 (2017). [CrossRef]  

17. J. Liang, K. Xiao, M. R. Pointer, X. Wan, and C. Li, “Spectra estimation from raw camera responses based on adaptive local-weighted linear regression,” Opt. Express 27(4), 5165–5180 (2019). [CrossRef]  

18. H. A. Khan, J. Thomas, J. Y. Hardeberg, and O. Laligant, “Illuminant estimation in multispectral imaging,” J. Opt. Soc. Am. A 34(7), 1085–1098 (2017). [CrossRef]  

19. X. Zhang, Q. Wang, J. Li, X. Zhou, Y. Yang, and H. Xu, “Estimating spectral reflectance from camera responses based on CIE XYZ tristimulus values under multi-illuminants,” Color Res. Appl. 42(1), 68–77 (2017). [CrossRef]  

20. M. M. Amiri and S. H. Amirshahi, “A step by step recovery of spectral data from colorimetric information,” J. Opt. 44(4), 373–383 (2015). [CrossRef]  

21. M. M. Darrodi, G. Finlayson, T. Goodman, and M. Mackiewicz, “Reference data set for camera spectral sensitivity estimation,” J. Opt. Soc. Am. A 32(3), 381–391 (2015). [CrossRef]  

22. O. Kohonen, J. Parkkinen, and T. Jääskeläinen, “Databases for spectral color science,” Color Res. Appl. 31(5), 381–390 (2006). [CrossRef]  

23. F. Wilcoxon, “Individual comparisons by ranking methods,” Biometrics 1(6), 80–83 (1945). [CrossRef]  



Equations (19)

(1) $u_k = \int_\omega l(\lambda)\, r(\lambda)\, c_k(\lambda)\, d\lambda + n_k, \quad k \in \{R, G, B\}$
(2) $\mathbf{u} = \mathbf{M}\mathbf{r}$
(3) $\hat{\mathbf{r}} = \mathbf{Q}\mathbf{u}$
(4) $\mathbf{R} = \mathbf{Q}\mathbf{U}$
(5) $\mathbf{Q} = (\mathbf{R}\mathbf{U}^{T})(\mathbf{U}\mathbf{U}^{T})^{-1}$
(6) $\mathbf{P} = (\mathbf{H}_{\mathrm{train}}\mathbf{U}_{\mathrm{train}}^{T})(\mathbf{U}_{\mathrm{train}}\mathbf{U}_{\mathrm{train}}^{T})^{-1}$
(7) $\mathbf{x}_{\mathrm{target}} = \mathbf{P}\mathbf{u}_{\mathrm{target}}$
(8) $\theta_i = \arccos\!\left(\dfrac{\langle \mathbf{x}, \mathbf{y} \rangle}{\|\mathbf{x}\|\,\|\mathbf{y}\|}\right), \quad 0 \le \theta_i \le \dfrac{\pi}{2}, \quad i \in \{1, 2, \ldots, N\}$
(9) $w_j = \dfrac{1}{\theta_j + \mu}, \quad j \in \{1, 2, \ldots, p\}$
(10) $\mathbf{W} = \mathrm{diag}(w_1, w_2, \ldots, w_p) \in \mathbb{R}^{p \times p}$
(11) $\mathbf{u}_{\mathrm{exp}} = [\,1 \;\; r \;\; g \;\; b \;\; rg \;\; rb \;\; gb \;\; r^2 \;\; g^2 \;\; b^2\,]^{T}$
(12) $\mathbf{Q}_{\mathrm{ada}} = (\tilde{\mathbf{R}}_{\mathrm{train}}\tilde{\mathbf{U}}_{\mathrm{train,exp}}^{T})(\tilde{\mathbf{U}}_{\mathrm{train,exp}}\tilde{\mathbf{U}}_{\mathrm{train,exp}}^{T})^{-1}$
(13) $\hat{\mathbf{r}}_{\mathrm{target}} = \mathbf{Q}_{\mathrm{ada}}\mathbf{u}_{\mathrm{target,exp}}$
(14) $\mathrm{RMSE}_i = \sqrt{\dfrac{1}{n}(\hat{\mathbf{r}}_{\mathrm{target}} - \mathbf{r}_i)^{T}(\hat{\mathbf{r}}_{\mathrm{target}} - \mathbf{r}_i)}, \quad i \in \{1, 2, \ldots, N\}$
(15) $w_k = \dfrac{1}{\mathrm{RMSE}_k + \varepsilon}, \quad k \in \{1, 2, \ldots, q\}$
(16) $\hat{\mathbf{r}}_{\mathrm{target}} = \mathbf{Q}_{\mathrm{ada}}\mathbf{u}_{\mathrm{target,exp}}$
(17) $\mathrm{SNR} = 10 \log_{10}\!\left(\dfrac{\mathrm{Tr}(\mathbf{M}\mathbf{K}_r\mathbf{M}^{T})}{\sigma^2}\right)$
(18) $\mathrm{GFC} = \dfrac{|\hat{\mathbf{r}}^{T}\mathbf{r}|}{\sqrt{\hat{\mathbf{r}}^{T}\hat{\mathbf{r}}}\,\sqrt{\mathbf{r}^{T}\mathbf{r}}}$
(19) $\Delta E_{ab}^{*} = \sqrt{(\Delta L^{*})^2 + (\Delta a^{*})^2 + (\Delta b^{*})^2}$
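The core regression step, a second-order polynomial expansion of the RGB response followed by a weighted pseudo-inverse over the selected training samples, can be sketched as follows. The data shapes, the column-scaling form of the weighting, and the function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def poly_expand(u):
    """Second-order polynomial expansion of one RGB response vector."""
    r, g, b = u
    return np.array([1, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b])

def weighted_poly_estimate(R_train, U_train, w, u_target):
    """Weighted polynomial-regression estimate of a target reflectance.

    R_train: (n_bands, N) training reflectances; U_train: (3, N) RGB responses;
    w: (N,) per-sample weights; u_target: (3,) target response.
    Sample weighting is applied here by scaling each training column, one
    common way to realize a weighted pseudo-inverse.
    """
    U_exp = np.stack([poly_expand(u) for u in U_train.T], axis=1)  # (10, N)
    Rw, Uw = R_train * w, U_exp * w                                # weight the columns
    Q = (Rw @ Uw.T) @ np.linalg.pinv(Uw @ Uw.T)                    # regression matrix
    return Q @ poly_expand(u_target)                               # estimated spectrum
```

If the training reflectances happen to be an exact linear function of the expanded responses and at least 10 independent samples are available, this estimator recovers the target spectrum exactly, which makes the sketch easy to sanity-check on synthetic data.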
