## Abstract

A sequential weighted nonlinear regression technique is proposed for estimating spectral reflectance from digital camera responses. The method consists of two stages that successively take into account the colorimetric and spectral errors between the training set and the target set. Based on a polynomial expansion model, locally optimal training samples are adaptively employed to recover spectral reflectance as accurately as possible. The performance of the method is compared with that of several existing methods using simulated camera responses under three noise levels and practical camera responses under both self-test and cross-test conditions. Results show that the proposed method recovers spectral reflectance more accurately than the other methods considered.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

In the past few years, there has been considerable interest in multispectral imaging for reconstructing the spectral information of samples. On the one hand, reflectance spectra, often described as the 'fingerprint' of an object, express color more accurately than tristimulus information and avoid metameric issues, so they have been used for color measurement and color quality control in the paint, plastic, ink, and textile industries. Object analysis and visualization, for instance in cultural heritage [1] and medical diagnosis [2–4], also employ reflectance spectra. Specifically, as a noninvasive method, multispectral imaging provides reflectance spectra of each substance and has been applied to pigment identification [1]; the spectral representation of skin color is important for the diagnosis of cutaneous diseases [2]; and vein visualization based on multispectral estimation can replace costly and time-consuming ultrasound imaging in various point-of-care applications, such as needle insertion for obese patients, children, and elderly people [4]. Since spectral data are essential for so many applications, a methodology that recovers reflectance spectra accurately is of clear practical value.

There are a variety of spectral characterization techniques, such as Wiener estimation, pseudo-inverse estimation, finite-dimensional modeling, the matrix R method, principal component analysis (PCA), independent component analysis (ICA), kernel methods, and other linear and nonlinear models (polynomial models, tetrahedral models, and neural networks). Among these, Wiener estimation and finite-dimensional modeling require the camera sensitivities to be measured instrumentally or estimated. Spectral sensitivities of a camera are inconvenient to obtain directly with professional instruments, while estimating them mathematically by indirect methods not only increases the complexity of the algorithm but also causes secondary propagation of errors. Compared with these two approaches, the remaining methods, which recover spectral reflectance without any a priori knowledge of the imaging system, are more practical and have attracted wider attention. However, classical methods based on pseudo-inverse estimation, matrix R, principal component analysis, and independent component analysis are straightforward but not very accurate. Various modifications of these traditional techniques have therefore been proposed. Xiao et al. [5] obtained basis functions by principal component analysis and built a polynomial model mapping RGB values to reflectance basis weights to predict reflectance. Heikkinen [6] and Shen [7] proposed a general regularization framework and a partial least-squares regression method, respectively, for robust reflectance estimation. Heikkinen [8,9] and Eckhard [10] utilized different kernel-based regression models for reflectance estimation with relatively high accuracy. Amiri [11] found that the spectral and colorimetric errors of the recovery could be reduced via weighted nonlinear regression. All of the aforementioned methods are global methods. Recently, many methods have been improved by concentrating on local solutions. Zhang et al. [12] divided the spectral reflectance space into 11 subgroups and used the extended principal components of the corresponding subgroup samples to reconstruct spectral reflectance. Bianco [13] chose the metamer with the shape most similar to the available reflectance to recover reflectance spectra with the desired tristimulus values. Babaei [14] employed weighting matrices to improve pseudo-inverse estimation for reflectance reconstruction. Zhang [15] approximated the reflectance of a testing sample by a linear combination of the k training-set reflectances whose camera responses are most similar to those of the testing sample. Liang [16,17] proposed a local-weighted nonlinear regression model based on camera responses and a local-weighted linear regression model based on raw camera responses to estimate spectral reflectance. These weighted-regression methods are the most promising in terms of accuracy, but they select and weight training samples only by the color differences between testing and training samples and ignore their spectral differences. In view of this, to further diminish the colorimetric and spectral errors between the estimated and actual spectra, it is necessary to develop an optimized method that adaptively selects and weights training samples according to both colorimetric and spectral characteristics.

In this work, we propose a sequential weighted nonlinear regression technique for estimating spectral reflectance from digital raw camera responses. The method consists of two stages designed to enhance the performance of reflectance estimation as much as possible. Colorimetric vector angles and spectral errors are used in succession to select and weight the training samples. The performance of the proposed method and several existing methods is compared via both a simulated camera system under different noise levels and a practical camera system under self-test and cross-test conditions. The experimental results show the superiority of our method in terms of both colorimetric and spectral accuracy.

## 2. Spectral imaging model

In the human visual system, an image is formed by light focused onto the retina, where three types of cones are sensitive mainly to long, middle, and short wavelengths, respectively [18]. Following a similar principle, the color filters of a camera act like cones. The responses of a camera with three channels depend on the spectral power distribution of the light source $l(\lambda )$, the surface reflectance $r(\lambda )$, the camera sensitivity functions ${c_k}(\lambda )$, and the system noise ${n_k}$. The image value ${u_k}$ can be written as a simple imaging model, as Eq. (1)

$${u_k} = \int_\lambda {l(\lambda ){c_k}(\lambda )r(\lambda )\textrm{d}\lambda } + {n_k},\quad k = 1,2,3.$$

Sampling the spectra at discrete wavelengths, the model can be expressed in matrix form as Eq. (2)

$${\textbf u} = {\textbf M}{\textbf r} + {\textbf n},$$

where ${\textbf u}$ is the camera response vector, ${\textbf M}$ represents the spectral responsivity including the spectral power distribution of the light source and the camera spectral sensitivity, ${\textbf r}$ denotes the spectral reflectance, and ${\textbf n}$ is the noise vector.
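As a concrete illustration, the discrete model ${\textbf u} = {\textbf M}{\textbf r} + {\textbf n}$ can be sketched in a few lines of numpy. The illuminant, sensitivities, and reflectance below are placeholder data; a real simulation would substitute measured curves sampled at the same wavelengths.

```python
import numpy as np

# Sketch of the sampled imaging model u = M r + n (Eq. (2)) with
# placeholder spectra; 31 bands correspond to 400-700 nm at 10-nm steps.
rng = np.random.default_rng(0)
n_bands = 31
l = np.ones(n_bands)               # light-source SPD (placeholder)
c = rng.random((3, n_bands))       # RGB sensitivities (placeholder)
M = c * l                          # spectral responsivity matrix, 3 x 31
r = rng.random(n_bands)            # one surface reflectance
n = rng.normal(0.0, 1e-4, size=3)  # additive sensor noise
u = M @ r + n                      # three-channel camera response
```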

## 3. Proposed method

Some basic nomenclature is first introduced before the proposed method is described in detail. The reconstruction of spectral reflectance is to estimate the reflectance $\hat{{\textbf r}}$ by the following model:

$$\hat{{\textbf r}} = {\textbf Q}{\textbf u},$$

where ${\textbf u}$ is the camera response vector as in Eq. (2) and ${\textbf Q}$ is the transformation matrix. Plainly, once ${\textbf Q}$ is known, the estimation of reflectance becomes very easy. To obtain ${\textbf Q}$, training samples are adopted to conduct a learning procedure. The reflectances and camera response values of the training samples obey the same relationship as Eq. (3), given in Eq. (4):

$${\textbf R} = {\textbf Q}{\textbf U},$$

where ${\textbf R}$ denotes the spectral reflectance matrix of the training samples and ${\textbf U}$ denotes the camera response matrix of the training samples. By minimizing the least-squares error between the actual and estimated reflectances of the training samples, the transformation matrix can be solved through the pseudo-inverse and Wiener estimation technique as Eq. (5)

$${\textbf Q} = {\textbf R}{{\textbf U}^\textrm{T}}{({\textbf U}{{\textbf U}^\textrm{T}})^{ - 1}},$$

where the superscript 'T' indicates the matrix transpose and the superscript '-1' represents the matrix inverse.

#### 3.1 Acquiring raw camera responses

Since images in raw format contain the unprocessed sensor data that truly correspond to the observational model in Eq. (1), raw images are used to estimate spectral reflectance. After capturing raw images, it is first necessary to convert them into easily readable TIFF files via the open-source software dcraw or the converting software provided by the camera's manufacturer. Then, three-channel RGB raw images can be generated by a de-mosaicing algorithm, for example, MATLAB's built-in de-mosaicing function.

#### 3.2 Weighting training samples based on colorimetric vector angle

##### 3.2.1 Colorimetric transformation

RGB signals generated by a camera are device-dependent and non-uniform; that is, the RGB values are not suitable for color evaluation, and it is necessary to transform raw RGB values into CIE XYZ values. Therefore, the training samples are first used to derive a transformation from the device-dependent camera RGB color space into the device-independent CIE XYZ color space. Based on the least-squares method, a 3×3 transformation matrix ${\textbf P}$ can be determined by solving Eq. (6)

$${\textbf P} = {{\textbf X}_{\textrm{train}}}{\textbf U}_{\textrm{train}}^\textrm{T}{({{\textbf U}_{\textrm{train}}}{\textbf U}_{\textrm{train}}^\textrm{T})^{ - 1}},$$

where ${{\textbf X}_{\textrm{train}}}$ and ${{\textbf U}_{\textrm{train}}}$ are the CIE XYZ matrix and the normalized raw response matrix of the training samples, respectively. The CIE XYZ values of the target sample are then obtained by Eq. (7)

$${{\textbf x}_{\textrm{target}}} = {\textbf P}{{\textbf u}_{\textrm{target}}},$$

where ${{\textbf x}_{\textrm{target}}}$ denotes the CIE XYZ value vector of the target sample and ${{\textbf u}_{\textrm{target}}}$ denotes the normalized raw response vector of the target sample.
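The least-squares fit of ${\textbf P}$ and its application to a target response can be sketched as follows. The training data here are synthetic stand-ins for measured RGB and XYZ values.

```python
import numpy as np

# Least-squares fit of the 3x3 matrix P mapping normalized raw RGB to
# CIE XYZ (Eq. (6)), via the normal-equation form P = X U^T (U U^T)^-1,
# followed by its application to a target response.
rng = np.random.default_rng(1)
U_train = rng.random((3, 100))        # raw RGB of 100 training samples
P_true = rng.random((3, 3))           # hypothetical "true" mapping
X_train = P_true @ U_train            # corresponding CIE XYZ values
P = X_train @ U_train.T @ np.linalg.inv(U_train @ U_train.T)
u_target = rng.random(3)
x_target = P @ u_target               # estimated XYZ of the target sample
```

Because the synthetic XYZ values are an exact linear image of the RGB values, the fit recovers `P_true` up to numerical precision; with real noisy data, `P` is the least-squares best fit instead.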

##### 3.2.2 Calculating weighting matrix

Training samples similar to the target sample play a more important role in reflectance estimation. Selecting training samples and assigning them appropriate weights according to color differences have been proved effective for improving estimation accuracy [11,16,17]. However, in some cases, training samples whose color differences from the target sample are relatively large may still have spectral shapes very similar to it. Therefore, the colorimetric vector angle appears to be a more practical and promising parameter for selecting and weighting training samples. This parameter treats the CIE XYZ values of training samples and target samples as vectors: ${\textbf x}$ and ${\textbf y}$ store the CIE XYZ values of the target sample and a training sample, respectively. The colorimetric vector angle between the *i*th training sample and the target sample is calculated by Eq. (8)

$${\theta _i} = {\cos ^{ - 1}}\left( {\frac{{{\textbf x} \cdot {{\textbf y}_i}}}{{\|{\textbf x}\|\,\|{{\textbf y}_i}\|}}} \right),\quad i = 1,2, \ldots ,N,$$

where *N* denotes the number of training samples. The colorimetric vector angle captures the difference between the two samples through the vector direction rather than the vector length: the smaller the angle, the higher the similarity of the two samples. The training samples are therefore sorted in ascending order of their colorimetric vector angles with the target sample, and the first *p* (1 ≤ *p* ≤ *N*) training samples are selected as the local optimal training samples. Since there is no pre-determined optimal value for *p*, it must be determined by a validation experiment. Then, a weighting coefficient ${w_j}$ is defined for each selected local optimal training sample by Eq. (9)

$${w_j} = \frac{1}{{{\theta _j} + \mu }},$$

where the subscript *j* refers to the *j*th local optimal training sample; ${\theta _j}$ denotes the colorimetric vector angle between the *j*th local optimal training sample and the target sample; and *μ* is a very small amount in case ${\theta _j}$ is equal to 0. In this work, *μ* = 0.0001. The weighting coefficients of the selected training samples are placed in descending order on the diagonal of a matrix ${\textbf W}$ to form the weighting matrix as follows:

$${\textbf W} = \textrm{diag}({w_1},{w_2}, \ldots ,{w_p}).$$
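The selection and weighting step above can be sketched as a small function. The data are synthetic, and the function signature is ours rather than anything prescribed by the paper.

```python
import numpy as np

def vector_angle_weights(x, Y, p, mu=1e-4):
    """Select the p training samples whose XYZ vectors subtend the
    smallest angle with the target XYZ vector x (Eq. (8)), and weight
    them by the reciprocal angle (Eq. (9)). Y is N x 3; returns the
    selected indices and the p x p diagonal weighting matrix W."""
    cosine = (Y @ x) / (np.linalg.norm(Y, axis=1) * np.linalg.norm(x))
    theta = np.arccos(np.clip(cosine, -1.0, 1.0))  # clip guards rounding
    idx = np.argsort(theta)[:p]        # p most similar samples
    w = 1.0 / (theta[idx] + mu)        # small mu guards theta = 0
    return idx, np.diag(w)

rng = np.random.default_rng(2)
x = rng.random(3)                      # target XYZ
Y = rng.random((50, 3))                # training XYZ values
idx, W = vector_angle_weights(x, Y, p=10)
```

Because the angles are sorted in ascending order, the diagonal of `W` comes out in descending order, matching the construction described above.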

#### 3.3 Responses expansion

The estimation accuracy increases with the number of terms in the polynomial model [19]. Therefore, terms such as *rg*, *rb*, *gb*, $r^2$, $g^2$, $b^2$, *rgb*, $r^2g$, $g^2b$, $rb^2$, $r^2b$, $rg^2$, $gb^2$, … can be added to the normalized raw camera response vector ${\textbf u}$ and matrix ${\textbf U}$. Together with a constant term 1, these high-order polynomial terms turn the linear regression into a nonlinear one. However, after the number of terms reaches a certain value, worse results or no significant improvement can be observed [11,16]. Amiri [11] and Liang [16] used nonlinear polynomial regressions with 17, 18, and 20 terms for spectral reflectance estimation; in this work, it was found that a polynomial with 10 terms gave more accurate results. Therefore, the normalized raw responses are expanded as Eq. (11)

$${{\textbf u}_{\textrm{exp}}} = {[1,\,r,\,g,\,b,\,rg,\,rb,\,gb,\,{r^2},\,{g^2},\,{b^2}]^\textrm{T}},$$

where ${{\textbf u}_{\textrm{exp}}}$ is the 10×1 vector of the expanded normalized raw camera responses; *r*, *g*, and *b* denote the normalized raw camera responses for the R-, G-, and B-channels of a pixel; and the superscript 'T' indicates the matrix transpose.
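The 10-term expansion is straightforward to implement. The original display of Eq. (11) did not survive extraction, so the exact term set below (constant, linear, and all six second-order products) is our reading of the 10-term model described in the text.

```python
import numpy as np

def expand_responses(u):
    """Expand normalized raw responses [r, g, b] into a 10-term
    polynomial vector: constant, linear, cross, and squared terms.
    The exact term set is our interpretation of Eq. (11)."""
    r, g, b = u
    return np.array([1.0, r, g, b, r*g, r*b, g*b, r**2, g**2, b**2])

u_exp = expand_responses([0.2, 0.5, 0.8])
```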

#### 3.4 Estimating spectral reflectance of the target sample

With all normalized raw camera responses expanded, an adaptive transformation matrix ${{\textbf Q}_{\textrm{ada}}}$ can be constructed as Eq. (12)

$${{\textbf Q}_{\textrm{ada}}} = {\tilde{{\textbf R}}_{\textrm{train}}}\tilde{{\textbf U}}_{\textrm{train,exp}}^\textrm{T}{({\tilde{{\textbf U}}_{\textrm{train,exp}}}\tilde{{\textbf U}}_{\textrm{train,exp}}^\textrm{T})^{ - 1}},$$

where ${\tilde{{\textbf R}}_{\textrm{train}}} = {{\textbf R}_{\textrm{train}}}{\textbf W}$ and ${\tilde{{\textbf U}}_{\textrm{train,exp}}} = {{\textbf U}_{\textrm{train,exp}}}{\textbf W}$ are the weighted reflectance matrix and the weighted expanded response matrix of the local optimal training samples. The spectral reflectance of the target sample is then estimated as Eq. (13)

$$\hat{{\textbf r}} = {{\textbf Q}_{\textrm{ada}}}{{\textbf u}_{\textrm{exp}}}.$$
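A minimal numpy sketch of the weighted pseudo-inverse step of Eq. (12), using synthetic data in place of real training reflectances, expanded responses, and weights:

```python
import numpy as np

# Weighted pseudo-inverse estimation: the colorimetric weighting matrix W
# multiplies both the training reflectances and the expanded responses
# before the pseudo-inverse is taken, yielding the adaptive matrix Q_ada.
rng = np.random.default_rng(3)
p = 20                                 # number of local optimal samples
R_train = rng.random((31, p))          # training reflectances, 31 bands
U_exp = rng.random((10, p))            # expanded responses, 10 terms
W = np.diag(rng.random(p) + 0.1)       # diagonal weighting matrix (placeholder)
R_w = R_train @ W                      # weighted reflectances
U_w = U_exp @ W                        # weighted expanded responses
Q_ada = R_w @ U_w.T @ np.linalg.inv(U_w @ U_w.T)
u_exp_target = rng.random(10)
r_hat = Q_ada @ u_exp_target           # first-stage reflectance estimate
```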

#### 3.5 Weighting training samples based on spectral error

Each training sample with reflectance ${{\textbf r}_i}$ can be weighted again according to its spectral similarity to the reflectance of the target sample derived in the previous step. To quantify this similarity, the root-mean-square error (RMSE) between the previously reconstructed reflectance of the target sample and that of the *i*th training sample is calculated by Eq. (14)

$$\textrm{RMS}{\textrm{E}_i} = \sqrt {\frac{1}{n}\sum\limits_{\lambda = 1}^n {{{(\hat{r}(\lambda ) - {r_i}(\lambda ))}^2}} },$$

where *n* represents the number of sampling points in the visible spectrum from 400 nm to 700 nm; in this work, *n* = 31. The training samples are sorted in ascending order of their root-mean-square errors with respect to the target sample, and the first *q* (1 ≤ *q* ≤ *N*) training samples are selected as the local optimal training samples (*q* must be determined by a validation experiment). Because spectral similarity is inversely proportional to RMSE, the weighting coefficient ${w_k}$ for each selected local optimal training sample can be calculated by Eq. (15) [20]

$${w_k} = \frac{1}{{\textrm{RMS}{\textrm{E}_k} + \varepsilon }},$$

where the subscript *k* refers to the *k*th local optimal training sample; $\textrm{RMS}{\textrm{E}_k}$ denotes the spectral difference between the *k*th local optimal training sample and the target sample; and *ε* is a very small amount in case $\textrm{RMS}{\textrm{E}_k}$ is equal to 0. In this work, *ε* = 0.0001. The weighting coefficients of the *q* training samples are placed in a diagonal matrix to generate a new weighting matrix ${{\textbf W}^{\prime}}$.
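The second-stage selection and weighting mirrors the first stage, with RMSE replacing the vector angle. A sketch with synthetic reflectances:

```python
import numpy as np

def rmse_weights(r_hat, R, q, eps=1e-4):
    """Second-stage weighting (Eqs. (14)-(15)): select the q training
    reflectances (columns of R) closest in RMSE to the first-stage
    estimate r_hat, and weight them by reciprocal RMSE."""
    rmse = np.sqrt(np.mean((R - r_hat[:, None]) ** 2, axis=0))
    idx = np.argsort(rmse)[:q]         # q spectrally closest samples
    w = 1.0 / (rmse[idx] + eps)        # small eps guards RMSE = 0
    return idx, np.diag(w)

rng = np.random.default_rng(4)
r_hat = rng.random(31)                 # first-stage reflectance estimate
R = rng.random((31, 100))              # 100 training reflectances
idx, W2 = rmse_weights(r_hat, R, q=15)
```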

#### 3.6 Re-estimating spectral reflectance of the target sample

${\tilde{{\textbf R}}_{\textrm{train}}}$ and ${\tilde{{\textbf U}}_{\textrm{train,exp}}}$ in Eq. (12) are updated by multiplying the new ${{\textbf R}_{\textrm{train}}}$ and ${{\textbf U}_{\textrm{train,exp}}}$ by ${{\textbf W}^{\prime}}$. A new adaptive transformation matrix ${{\textbf Q}_{\textrm{ada}}}^{\prime}$ is then calculated via Eq. (12) using the updated ${\tilde{{\textbf R}}_{\textrm{train}}}$ and ${\tilde{{\textbf U}}_{\textrm{train,exp}}}$.

Finally, the spectral reflectance of the target sample can be re-estimated by Eq. (16)

$${\hat{{\textbf r}}^{\prime}} = {{\textbf Q}_{\textrm{ada}}}^{\prime}{{\textbf u}_{\textrm{exp}}}.$$

## 4. Experiment

To demonstrate the performance of our method, experiments were carried out based on both simulated and real data.

#### 4.1 Simulated experiments

In the simulated experiments, RGB values of 1269 Munsell matte color chips were simulated with a Nikon D5100 camera and CIE standard illuminant D65 using the mathematical model of Eq. (1). Figures 1(a) and 1(b) show the spectral sensitivity of the camera and the spectral power distribution of the light source, ranging from 400 to 700 nm [21]. The reflectance data were measured by Hiltunen et al. [22] with a Perkin-Elmer Lambda 9 UV/VIS/NIR spectrophotometer in the range of 380 to 800 nm with 1-nm sampling; the corresponding reflectances from 400 to 700 nm were extracted at 10-nm intervals.

Additive normally distributed noise with different levels was added to the three camera channels to simulate a real imaging system. The signal-to-noise ratio (SNR) can be calculated by Eq. (17)

$$\textrm{SNR} = 10{\log _{10}}\frac{{\textrm{tr}({\textbf M}{{\textbf K}_r}{{\textbf M}^\textrm{T}})}}{{3{\sigma ^2}}},$$

where ${\textbf M}$ represents the spectral responsivity matrix of Eq. (2), the autocorrelation matrix ${{\textbf K}_r}$ is calculated from all Munsell matte color chips, and ${\sigma ^2}$ denotes the variance of the zero-mean Gaussian noise. The spectral estimation performance under three noise levels (SNR = ∞, 50, and 30) was studied, where ∞ denotes that no noise is added to the ideal camera responses. The noise variance ${\sigma ^2}$ for the other two SNRs can be calculated by inverting Eq. (17).
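Deriving the noise variance from a target SNR can be sketched as follows. Since the original display of Eq. (17) did not survive extraction, the exact form used here, mean channel signal power over noise variance in decibels, is an assumption; the placeholder reflectances stand in for the Munsell set.

```python
import numpy as np

# Invert a decibel SNR definition for the Gaussian noise variance, then
# add noise to the simulated camera responses. The SNR form (mean channel
# signal power / sigma^2, in dB) is assumed, not taken from the paper.
rng = np.random.default_rng(5)
M = rng.random((3, 31))                       # spectral responsivity matrix
R = rng.random((31, 1269))                    # reflectances (placeholder)
K_r = (R @ R.T) / R.shape[1]                  # reflectance autocorrelation
signal_power = np.trace(M @ K_r @ M.T) / 3.0  # mean channel signal power
snr_db = 30.0
sigma2 = signal_power / 10 ** (snr_db / 10)   # solve the SNR equation for sigma^2
U = M @ R                                     # noiseless responses
U_noisy = U + rng.normal(0.0, np.sqrt(sigma2), U.shape)
```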

The 1269 Munsell matte color chips were randomly divided into three groups: a training set of 635 samples, and a validation set and a testing set of 317 samples each. The training-estimating cycle was repeated 50 times with randomly partitioned samples. For each trial, the transformation matrix was calculated using data from the training set. Since the number of local optimal training samples influences the results in both stages of spectral estimation, the validation set was employed to determine the two numbers of local optimal training samples by minimizing the mean spectral error. The testing set was then employed to evaluate the performance of the proposed method.

#### 4.2 Practical experiments

In the practical experiments, the actual raw responses of the imaging system were used. Three color charts were adopted to verify the proposed method: the X-Rite ColorChecker SG (CCSG, 140 samples), an IT8.7-3 color chart (952 samples, printed on Fantanc UH180A digital printing paper with a Canon iPF8410 printer), and an Agfa IT8.7-2 color chart (288 samples). Figure 2 presents the color specifications of the samples in the CIE L*a*b* color space under the D65 illuminant and the 1964 standard observer. It can be observed that the samples are evenly distributed in color space.

A real RGB camera, the Canon EOS 80D, and an X-Rite CI64 spectrophotometer with the Color iControl software were the measurement devices. With a fixed focal length of 50 mm, the camera's f-number was set to F5.6, the ISO to 200, and the exposure time to 1/10 s. In a room lit by D65 fluorescent lamps, after setting the white balance manually with a standard gray board, photographs of the three color charts were taken. The raw 'CR2' files output by the camera were converted to 'tiff' files via the software dcraw and then de-mosaiced using MATLAB's built-in de-mosaicing algorithm. Spectral data of the three color charts were then measured with the spectrophotometer and sampled from 400 to 700 nm in 10-nm increments. The raw camera response values were averaged over areas of about 40×40 pixels corresponding to the positions measured by the spectrophotometer with a 4-mm aperture.

The practical experiments were performed under self-test and cross-test conditions. In the self test, the CCSG color chips were used as the color targets, with half for training, half of the remainder for validation, and the rest for testing; 50 random trials were conducted for partitioning the samples. In the cross test, the whole IT8.7-3 and Agfa IT8.7-2 color charts were used as the training set and validation set, respectively, and the testing set was the same as that in the self test.

#### 4.3 Evaluation procedure

The spectral estimation accuracy of the target set was evaluated with several metrics. Magnitude and shape differences between the estimated and measured spectra were quantified by the RMSE of Eq. (14), and the goodness-of-fit coefficient (GFC) of Eq. (18) was used as a complementary metric of spectral difference

$$\textrm{GFC} = \frac{{\left| {\sum\nolimits_\lambda {r(\lambda )\hat{r}(\lambda )} } \right|}}{{\sqrt {\sum\nolimits_\lambda {r{{(\lambda )}^2}} } \sqrt {\sum\nolimits_\lambda {\hat{r}{{(\lambda )}^2}} } }}.$$

Finally, assuming the CIE 1964 10° standard observer and CIE D65 standard illuminant, the CIE L*a*b* coordinates of the reflectance spectra were calculated. The CIE 1976 L*a*b* color differences were then calculated by Eq. (19) to define the perceptual differences between the estimated and measured spectra:

$$\Delta E_{ab}^{\ast} = \sqrt {{{(\Delta {L^{\ast}})}^2} + {{(\Delta {a^{\ast}})}^2} + {{(\Delta {b^{\ast}})}^2}}.$$
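The three evaluation metrics are simple to compute once the spectra (or L*a*b* coordinates) are available. The spectra-to-L*a*b* conversion is omitted here; `delta_e_ab` takes precomputed coordinates.

```python
import numpy as np

def rmse(r, r_hat):
    """Spectral root-mean-square error between measured and estimated spectra."""
    return np.sqrt(np.mean((np.asarray(r) - np.asarray(r_hat)) ** 2))

def gfc(r, r_hat):
    """Goodness-of-fit coefficient: absolute cosine similarity between
    measured and estimated spectra; 1 indicates a perfect spectral shape match."""
    r, r_hat = np.asarray(r), np.asarray(r_hat)
    return np.abs(r @ r_hat) / (np.linalg.norm(r) * np.linalg.norm(r_hat))

def delta_e_ab(lab1, lab2):
    """CIE 1976 L*a*b* color difference between two sets of coordinates."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2))

r = np.linspace(0.1, 0.9, 31)   # an example reflectance, 31 bands
```

Note that GFC is scale-invariant (it compares spectral shape only), which is why RMSE is kept as the complementary magnitude-sensitive metric.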

Additionally, the mean value of the RMSE measure for validation set was used in search for the number of local optimal training samples.

## 5. Results and discussion

We have compared the spectral and colorimetric estimation results of our method and several existing methods. These compared methods included regularized least squares (RLS) method [6], regularized local linear model (RLLM) [15], logarithmic kernel method [10], PCA method proposed by Xiao [5], weighted nonlinear regression (WNR) method used by Amiri [11], and local-weighted linear regression (LLR) method proposed by Liang [17]. Especially, for RLS method, RLLM method, logarithmic kernel method, LLR method and the proposed method, all the estimation errors were calculated with the optimal parameters found by minimizing the estimation error on the validation set.

For the simulated data of the Munsell matte color chips, spectral errors and color differences of the estimated spectra were evaluated in terms of mean and maximum (Max) errors. The results over 50 trials are summarized in Table 1, with the best result of each column in bold. As the table shows, the proposed method yields a clear reduction of the mean and maximum values of RMSE and $\Delta E_{ab}^{\ast}$, and a slight increase of the mean GFC between the actual and estimated spectra under all noise conditions. In particular, for all methods, estimation quality under lower noise levels is better than under higher noise levels; the LLR method, which also uses local-weighted regression, achieves the result closest to the proposed method under the lower noise levels, but its performance is even slightly worse than some other existing methods under the higher noise level. Overall, the results under different noise levels show that our method improves both the accuracy and the stability of reflectance estimation.

Figures 3, 4, and 5 show boxplot distributions of the spectral errors and color differences of the proposed method compared with the other estimation methods under different noise conditions. In each boxplot, the height of the blue rectangle defines the interquartile range (IQR); its bottom line, red line, and top line indicate the first quartile (25th percentile), the median (50th percentile), and the third quartile (75th percentile) of the error dataset, respectively. The bottom and top black lines indicate the 'minimum' (first quartile − 1.5×IQR, not the smallest value) and the 'maximum' (third quartile + 1.5×IQR, not the largest value), respectively, and the red '+' symbols mark outliers. Three observations can be made from these boxplots: (1) the error data of the proposed method are more tightly grouped than those of the other methods, meaning that its error distribution over different samples is relatively concentrated; (2) whether the noise level is low or high, both the spectral errors and the color differences of the proposed method are, as a whole, the smallest; (3) the lower the noise level, the more obvious the superiority of the proposed method. Therefore, controlling camera noise is beneficial to the better use of this method.

The results for the practical data, with different training and validation sets used to estimate CCSG, are summarized in Table 2 over 50 trials, with the best result of each column in bold. As can be seen from Table 2, the proposed method generally produces the smallest RMSE and $\Delta E_{ab}^{\ast}$ and the largest GFC, regardless of whether the training, validation, and testing samples come from the same color chart. This result is consistent with the simulated data experiments. However, an inconsistency between the best spectral accuracy and the best colorimetric accuracy can be found: in the self test, although the maximum spectral error of the proposed method is the lowest, the maximum color difference of the kernel method is slightly lower than that of the proposed method. This indicates that, since different evaluation metrics have different emphases, the $\Delta E_{ab}^{\ast}$ measure is not completely consistent with the RMSE measure, as noted in [6,8]. Besides, although the number of CCSG color chips is significantly smaller than the number of IT8.7/3 color chips, all the methods perform better with the CCSG training set than with the IT8.7/3 training set. This finding shows that the spectral estimation accuracy of all the methods is also affected by the medium [5,19].

Figures 6 and 7 illustrate the boxplot distributions of the spectral errors and color differences of the proposed method compared with the other estimation methods on the practical data. They intuitively show that the spectral errors and color differences of the proposed method are the smallest, so the practical experiments yield results similar to the simulated experiments. It is also worth mentioning that the superiority of the proposed method over the other methods is more obvious in the practical experiments than in the simulated ones.

To evaluate whether the superiority of the proposed method was statistically significant, the Wilcoxon signed-rank test, a nonparametric test that does not require the error data to follow a particular probability distribution, was used to compare the whole error distribution of the proposed method with that of each other method [23]. All pairwise comparisons between the proposed method and the benchmarking methods on the simulated and practical data, for both the RMSE and $\Delta E_{ab}^{\ast}$ distributions, were made at a significance level of p = 0.05. Table 3 shows which pairwise error-distribution differences are statistically significant (bold). Under all imaging conditions, for RMSE, the estimation accuracy of the proposed method is always significantly better than that of any benchmarking method; for $\Delta E_{ab}^{\ast}$, the estimation accuracy of the proposed method is also significantly better than that of most benchmarking methods.
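This kind of paired significance test is readily available in scipy. The per-sample error values below are synthetic stand-ins for the real RMSE distributions of two methods.

```python
import numpy as np
from scipy.stats import wilcoxon

# Wilcoxon signed-rank test on paired per-sample errors from two methods.
# The error data are hypothetical; in the paper's setting they would be
# the 317 testing-sample RMSEs of the proposed and a benchmarking method.
rng = np.random.default_rng(6)
err_proposed = rng.normal(0.010, 0.002, 317)                  # hypothetical RMSEs
err_baseline = err_proposed + rng.normal(0.004, 0.001, 317)   # consistently larger
stat, p_value = wilcoxon(err_proposed, err_baseline)
significant = p_value < 0.05
```

Because the test uses only the ranks of the paired differences, it is robust to the skewed, non-Gaussian error distributions that reflectance-estimation errors typically exhibit.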

## 6. Conclusion

This work proposed a sequential adaptive estimation method for spectral reflectance based on camera responses, in which training samples are adaptively selected and weighted twice. Simulated and practical experiments showed that the proposed method can recover spectral reflectance accurately, particularly when the medium of the training and validation sets is the same as that of the testing set. Its performance was compared with that of existing methods for a simulated camera under different SNR levels and for a real camera; the results indicated that the proposed method performed best, with minimum spectral errors and color differences. Hence, sequential adaptive estimation is effective for estimating spectral reflectance.

## Funding

National Natural Science Foundation of China (61575090, 61775169).

## Disclosures

The authors declare that there are no conflicts of interest related to this article.

## References

**1. **A. Pelagotti, A. D. Mastio, A. D. Rosa, and A. Piva, “Multispectral imaging of paintings,” IEEE Signal Process. Mag. **25**(4), 27–36 (2008). [CrossRef]

**2. **A. Leonardi, S. Buonaccorsi, V. Pellacchia, L. M. Moricca, E. Indrizzi, and G. Fini, “Maxillofacial prosthetic rehabilitation using extraoral implants,” J. Craniofac. Surg. **19**(2), 398–405 (2008). [CrossRef]

**3. **I. Nishidate, T. Maeda, K. Niizeki, and Y. Aizu, “Estimation of melanin and hemoglobin using spectral reflectance images reconstructed from a digital RGB image by the wiener estimation method,” Sensors **13**(6), 7902–7915 (2013). [CrossRef]

**4. **J. H. Song, C. Kim, and Y. Yoo, “Vein visualization using a smart phone with multispectral wiener estimation for point-of-care applications,” IEEE J. Biomed. Health Inform. **19**(2), 773–778 (2015). [CrossRef]

**5. **K. Xiao, Y. Zhu, C. Li, D. Connah, J. M. Yates, and S. Wuerger, “Improved method for skin reflectance reconstruction from camera images,” Opt. Express **24**(13), 14934–14950 (2016). [CrossRef]

**6. **V. Heikkinen, T. Jetsu, J. Parkkinen, M. Hauta-Kasari, T. Jaaskelainen, and S. D. Lee, “Regularized learning framework in the estimation of reflectance spectra from camera responses,” J. Opt. Soc. Am. A **24**(9), 2673–2683 (2007). [CrossRef]

**7. **H. Shen, H. Wan, and Z. Zhang, “Estimating reflectance from multispectral camera responses based on partial least-squares regression,” J. Electron. Imaging **19**(2), 020501 (2010). [CrossRef]

**8. **V. Heikkinen, A. Mirhashemi, and J. Alho, “Link functions and Matérn kernel in the estimation of reflectance spectra from RGB responses,” J. Opt. Soc. Am. A **30**(11), 2444–2454 (2013). [CrossRef]

**9. **V. Heikkinen, C. Cámara, T. Hirvonen, and J. Alho, “Spectral imaging using consumer-level devices and kernel-based regression,” J. Opt. Soc. Am. A **33**(6), 1095–1110 (2016). [CrossRef]

**10. **T. Eckhard, E. M. Valero, J. Hernández-Andrés, and V. Heikkinen, “Evaluating logarithmic kernel for spectral reflectance estimation—effects on model parametrization, training set size, and number of sensor spectral channels,” J. Opt. Soc. Am. A **31**(3), 541–549 (2014). [CrossRef]

**11. **M. M. Amiri and M. D. Fairchild, “A strategy toward spectral and colorimetric color reproduction using ordinary digital cameras,” Color Res. Appl. **43**(5), 675–684 (2018). [CrossRef]

**12. **X. Zhang and H. Xu, “Reconstructing spectral reflectance by dividing spectral space and extending the principal components in principal component analysis,” J. Opt. Soc. Am. A **25**(2), 371–378 (2008). [CrossRef]

**13. **S. Bianco, “Reflectance spectra recovery from tristimulus values by adaptive estimation with metameric shape correction,” J. Opt. Soc. Am. A **27**(8), 1868 (2010). [CrossRef]

**14. **V. Babaei, S. H. Amirshahi, and F. Agahian, “Using weighted pseudo-inverse method for reconstruction of reflectance spectra and analyzing the dataset in terms of normality,” Color Res. Appl. **36**(4), 295–305 (2011). [CrossRef]

**15. **W. Zhang, G. Tang, D. Dai, and A. Nehorai, “Estimation of reflectance from camera responses by the regularized local linear model,” Opt. Lett. **36**(19), 3933–3935 (2011). [CrossRef]

**16. **J. Liang and X. Wan, “Optimized method for spectral reflectance reconstruction from camera responses,” Opt. Express **25**(23), 28273–28287 (2017). [CrossRef]

**17. **J. Liang, K. Xiao, M. R. Pointer, X. Wan, and C. Li, “Spectra estimation from raw camera responses based on adaptive local-weighted linear regression,” Opt. Express **27**(4), 5165–5180 (2019). [CrossRef]

**18. **H. A. Khan, J. Thomas, J. Y. Hardeberg, and O. Laligant, “Illuminant estimation in multispectral imaging,” J. Opt. Soc. Am. A **34**(7), 1085–1098 (2017). [CrossRef]

**19. **X. Zhang, Q. Wang, J. Li, X. Zhou, Y. Yang, and H. Xu, “Estimating spectral reflectance from camera responses based on CIE XYZ tristimulus values under multi-illuminants,” Color Res. Appl. **42**(1), 68–77 (2017). [CrossRef]

**20. **M. M. Amiri and S. H. Amirshahi, “A step by step recovery of spectral data from colorimetric information,” J. Opt. **44**(4), 373–383 (2015). [CrossRef]

**21. **M. M. Darrodi, G. Finlayson, T. Goodman, and M. Mackiewicz, “Reference data set for camera spectral sensitivity estimation,” J. Opt. Soc. Am. A **32**(3), 381–391 (2015). [CrossRef]

**22. **O. Kohonen, J. Parkkinen, and T. Jääskeläinen, “Databases for spectral color science,” Color Res. Appl. **31**(5), 381–390 (2006). [CrossRef]

**23. **F. Wilcoxon, “Individual comparisons by ranking methods,” Biometrics **1**(6), 80–83 (1945). [CrossRef]