Optica Publishing Group

Spectral missing color correction based on an adaptive parameter fitting model

Open Access

Abstract

With the development of remote sensing technology, true-color visualization of hyperspectral LiDAR echo signals has become a hotspot for both academic research and commercial applications. However, the limited emission power of hyperspectral LiDAR causes the loss of spectral-reflectance information in some channels of the echo signal, so colors reconstructed from the echo signal inevitably suffer from serious color cast. To solve this problem, a spectral missing color correction approach based on an adaptive parameter fitting model is proposed in this study. Given the known band intervals of the missing spectral reflectance, the colors obtained by incomplete spectral integration are corrected to accurately restore the target colors. Experimental results show that, for both color blocks and hyperspectral images, the color difference between the colors corrected by the proposed model and the ground truth is smaller than that of the uncorrected colors, and the image quality is higher, realizing accurate reproduction of the target colors.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Recent years have witnessed a spurt of progress in digital cities and the digital representation of ground objects. The collection, processing, cognition and storage of the various optical and geometric characteristics of surface materials, especially spectrum and color appearance, have become vital study directions in the optical field [1–4]. High-precision surface color reproduction is in strong demand and has theoretical significance and application value in various fields, including ground observation, modeling for airborne and on-orbit remote sensing equipment, color three-dimensional restoration of geomorphic surfaces, resource exploration, color three-dimensional digital models, national defense, cultural relics protection, and military science and technology [5–8].

Spectral reflectance is the fingerprint of the color information of an object surface, characterizing the ratio of radiant exitance to radiant incidence in each wavelength band [9–15]. According to colorimetry, the color of an object surface can be reproduced by combining the illuminant information of the environment, the spectral information of the object surface in the visible range, and the standard colorimetric observer matching functions [16,17]. The color reproduction process is displayed in Fig. 1.


Fig. 1. Schematic diagram of color reproduction of object surface. (D65/10°)



Fig. 2. Comparison of color cast of missing channels. (D65/10°)


However, when the complete spectral reflectance of the target surface in the visible range is to be obtained by hyperspectral lidar, some spectral bands are missing, or some channel data are unavailable [18–21]. The main reasons are as follows. First, owing to atmospheric scattering of the laser, the light intensity of the blue laser band transmitted to the hyperspectral sensor is insufficient, causing the spectral information of this band to be missing or unavailable. Second, owing to the limited transmission power of the hardware, the laser echo energy in the short-wavelength band (380–450 nm) is weak; therefore, the spectrum cannot be obtained, or the obtained spectral information is unavailable. Third, some spectral channels of the equipment cannot be collected, or the collected data are inaccurate, due to hardware aging, damage or wrong operation of the spectral reflectance acquisition equipment. The surface color reproduced from incomplete spectral information in the visible range will inevitably exhibit serious color cast, as shown in Fig. 2.

Figures 2(b)–(d) are obtained by color integration of the spectral reflectance with different missing parts. The comparison shows that although the missing information accounts for only 13%–22% of all information, the color cast of the target color is serious.

To further confirm the severity of the problem, the DE2000 color difference equation is used to calculate the color difference between the color of the complete spectrum in the visible range and the color of the partially missing spectrum, with a missing band of 30 nm stepped from 400 nm (D65/10°). Figure 3(a) shows the color difference map of the Munsell 1269 color card after some channels are removed, and Fig. 3(b) presents results for 10 images of the ICVL [22] hyperspectral data set, where the color difference of 1000 randomly sampled pixels is computed for each image. As shown in Fig. 3, although the missing channels occupy only 9.677% of the data, the DE2000 color difference reaches up to 28, which seriously influences human perception and judgment of color.


Fig. 3. Color difference caused by partial spectral band missing.


For the color cast caused by partial spectrum missing in the visible range, Binhui Wang et al. and Bowen Chen et al. explored the wavelength combination of 466 nm, 546 nm and 626 nm. When these wavelengths are not missing, the component information of the three RGB channels can be calculated by quantization and normalization, which is expected to weaken the effect of the missing spectrum on the reproduced colors [23,24]. This is a compromise between the red, green and blue primary wavelengths specified by the CIE (International Commission on Illumination) and the spectral channels with higher signal-to-noise ratio. To a certain extent, the color of the target surface can be reproduced. However, since the tristimulus values are not obtained by color integration, the reproduced color differs considerably from the real perception of human eyes. Meanwhile, the information of the illuminant and the standard observer is not incorporated, causing a color difference between the reproduced color and the real color. Wang Tengfeng et al. reconstructed the missing spectrum with a sequence prediction method based on gradient boosting decision trees, calculating the parameters and weights of the prediction model by supervised learning [25]. The missing spectrum is predicted from the spectral segments with high signal-to-noise ratio. This method can effectively predict the missing spectral information and achieves a good color reproduction effect. However, as a learning-based method, it needs a large number of prior samples. Moreover, it must model each data set separately and compute many parameters and weights, resulting in poor real-time performance. In addition, since the method performs color reconstruction based on sequence prediction, it cannot accurately reconstruct a missing spectrum in the middle of the range, so the target color cannot be accurately reproduced.

In addition to the above-mentioned methods, models can also be adopted to correct the color cast caused by spectrum missing. Studies of color correction can be divided into four categories [26]: methods based on physics, statistics, learning and cast correction. Using the dichromatic reflection model, physics-based methods estimate the color of reflective objects through highlight reflection. Sung-Min Woo et al. proposed to estimate the illuminant using the highlight pixels formed in the inverse-intensity chromaticity space [27], and then correct the surface color information of reflective objects. Statistics-based methods estimate the illuminant information from statistical data in the image to realize color correction of the target. YanLin Qian et al. proposed a novel grayness index based on the dichromatic reflection model [28], and estimated the ambient illuminant using the likelihood derived from the gray plane in the grayness index map. Zichen Zhao et al. proposed the chromaticity-line prior to constrain the solution space [29], and determined the ambient illuminant from the pixel information of the dominant color in the image to correct color. Learning-based color correction methods are characterized by model training with prior data, from which the estimated illuminant characteristics are obtained. Firas Laakom et al. proposed to employ a convolutional neural network to learn the histogram features of images [30], introduced attention mechanisms to dynamically learn the characteristic illuminants, and finally corrected the color information by multi-dimensional illuminant feature regression.
Combining the dichromatic reflection model with a deep model, Marco Buzzelli et al. used a deep neural network to estimate several representative dichromatic reflection planes with different confidences from the image for color correction [31]. Using a contrastive learning framework, Yi-Chen Lo et al. constructed positive and negative pairs with different difficulty levels from the label information in the data set [32], and trained an illuminant-feature extractor to estimate the image illuminant. Mahmoud Afifi et al. calibrated a deep network to unseen camera parameters using log-chromaticity histograms of many unlabeled images [33]; the proposed network generates the filter and bias used for localization in logarithmic space to realize image color correction. As cast-correction methods can improve the accuracy of an initial correction based on statistical methods, Mahmoud Afifi et al. developed an as-projective-as-possible conversion method [34], which calculates the correction matrix, assigns different weights to different training samples, and computes a locally adaptive correction matrix for the initial correction method to realize color correction. However, none of the above color correction models can solve the color cast problem resulting from partial spectral missing. First, some methods need to find a reference white point to estimate the color cast parameters, but for the color cast caused by spectrum missing, there is no standard white point or standard gray point. Second, for the color cast caused by spectrum missing, the illuminant is fixed, and the color cast is not caused by a change of illuminant; therefore, correction algorithms designed for changes of the ambient illuminant are not applicable.
Finally, the experimental results show that the colors corrected by the above algorithms still deviate from the true colors when the color cast is caused by spectrum missing.

To address the serious color cast of hyperspectral images caused by partial spectral missing in the visible range, this study proposes a color correction algorithm based on an adaptive parameter fitting model. First, the tristimulus values of the partially missing spectrum and the complete spectrum were calculated respectively, and the tristimulus error curves were obtained by subtracting the two groups of tristimulus values. Second, function fitting was applied to the error curves of the tristimulus values, which serve as the color cast model. Meanwhile, the color correction model was derived by combining the color cast model with the entropy-based adaptive parameter h. Finally, the accuracy of the color cast model and the color correction model was confirmed on color cards and hyperspectral images, respectively.

2. Color correction algorithm of spectrum missing based on adaptive parameter fitting model

In line with the colorimetric principle, the tristimulus values $\{ X,Y,Z\} $ of the object color are obtained by an integral operation over the relative spectral power distribution of the illuminant $l(\lambda )$, the spectral tristimulus functions $\{ \bar{x}(\lambda ),\bar{y}(\lambda ),\bar{z}(\lambda )\} $ and the spectral reflectance of the object surface $r(\lambda )$, combined with the adjustment factor $K$ and the wavelength sampling interval $d\lambda$, as shown in Eq. (1).

$$\begin{array}{l} X = K\int {l(\lambda )r(\lambda )\bar{x}(\lambda )} \,d\lambda ;\\ Y = K\int {l(\lambda )r(\lambda )\bar{y}(\lambda )} \,d\lambda ;\\ Z = K\int {l(\lambda )r(\lambda )\bar{z}(\lambda )} \,d\lambda \end{array}$$
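Equation (1) can be sketched numerically as a Riemann sum. The following is a minimal illustration: the Gaussian stand-ins for the color matching functions are hypothetical, and in practice the tabulated CIE 1964 10° observer functions and the D65 spectral power distribution would be substituted. The factor $K$ follows the usual colorimetric convention of normalizing a perfect white ($r(\lambda)=1$) to $Y=100$.

```python
import numpy as np

def tristimulus(wavelengths, illuminant, reflectance, xbar, ybar, zbar):
    """Approximate the integrals of Eq. (1) by a Riemann sum on a uniform
    wavelength grid. All inputs are 1-D arrays on the same grid."""
    d_lambda = wavelengths[1] - wavelengths[0]
    # K normalizes so that a perfect white (r = 1 everywhere) has Y = 100.
    k = 100.0 / np.sum(illuminant * ybar * d_lambda)
    x = k * np.sum(illuminant * reflectance * xbar * d_lambda)
    y = k * np.sum(illuminant * reflectance * ybar * d_lambda)
    z = k * np.sum(illuminant * reflectance * zbar * d_lambda)
    return x, y, z

# Illustrative 10 nm grid and Gaussian stand-ins for the CMFs (not CIE data).
wl = np.arange(400.0, 710.0, 10.0)
ill = np.ones_like(wl)
xb = np.exp(-((wl - 600.0) / 50.0) ** 2)
yb = np.exp(-((wl - 550.0) / 50.0) ** 2)
zb = np.exp(-((wl - 450.0) / 50.0) ** 2)
```

For a perfect white reflectance on this grid, the sketch returns $Y = 100$ by construction, which is a convenient sanity check of the normalization.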

To derive a correction algorithm suitable for all colors, a standard color card with patches uniformly distributed in color space and a large number of samples should be selected as the reference. Therefore, this study employs the standard Munsell 1269 color card to calculate and verify the error model and the correction model.

For the Munsell 1269 color card, Eq. (2) calculates, as the error function, the average difference between the tristimulus values of the spectral reflectance in the 400–700 nm range with a missing band stepped in 10 nm increments and the tristimulus values of the complete spectrum.

$$\begin{aligned} f{(\bar{X})^{\prime}_\delta } &= \frac{{\sum\limits_{n = 1}^{1269} {\left[ {\int_{400}^{700} {l(\lambda )r(\lambda )} \bar{x}(\lambda )d\lambda - \left( {\int_{400}^{{\delta_s}} {l(\lambda )r(\lambda )} \bar{x}(\lambda )d\lambda } \right) - \left( {\int_{{\delta_e}}^{700} {l(\lambda )r(\lambda )} \bar{x}(\lambda )d\lambda } \right)} \right]} }}{{1269}}\\ f{(\bar{Y})^{\prime}_\delta } &= \frac{{\sum\limits_{n = 1}^{1269} {\left[ {\int_{400}^{700} {l(\lambda )r(\lambda )} \bar{y}(\lambda )d\lambda - \left( {\int_{400}^{{\delta_s}} {l(\lambda )r(\lambda )} \bar{y}(\lambda )d\lambda } \right) - \left( {\int_{{\delta_e}}^{700} {l(\lambda )r(\lambda )} \bar{y}(\lambda )d\lambda } \right)} \right]} }}{{1269}}\\ f{(\bar{Z})^{\prime}_\delta } &= \frac{{\sum\limits_{n = 1}^{1269} {\left[ {\int_{400}^{700} {l(\lambda )r(\lambda )} \bar{z}(\lambda )d\lambda - \left( {\int_{400}^{{\delta_s}} {l(\lambda )r(\lambda )} \bar{z}(\lambda )d\lambda } \right) - \left( {\int_{{\delta_e}}^{700} {l(\lambda )r(\lambda )} \bar{z}(\lambda )d\lambda } \right)} \right]} }}{{1269}} \end{aligned}$$
where $\delta $ is the missing spectrum part, ${\delta _s}$ represents the band where the missing spectrum starts and ${\delta _e}$ indicates the band where the missing spectrum ends.
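For a single color, the bracketed term in Eq. (2) reduces to the integral over just the missing band $[\delta_s, \delta_e]$, since the full integral minus the two partial integrals on either side leaves exactly the missing interval. A small numerical check of this identity, with hypothetical arrays and without the Munsell averaging or the normalization factor $K$:

```python
import numpy as np

def band_error(wavelengths, illuminant, reflectance, cmf, delta_s, delta_e):
    """Per-color error of Eq. (2): full integral minus the partial integrals
    on either side of the missing band [delta_s, delta_e] (Riemann sums)."""
    d = wavelengths[1] - wavelengths[0]
    w = illuminant * reflectance * cmf * d
    full = np.sum(w)
    before = np.sum(w[wavelengths < delta_s])
    after = np.sum(w[wavelengths > delta_e])
    return full - before - after

# Hypothetical spectra on a 10 nm grid.
wl = np.arange(400.0, 710.0, 10.0)
ill = np.ones_like(wl)
r = np.linspace(0.2, 0.8, wl.size)
cmf = np.exp(-((wl - 550.0) / 60.0) ** 2)
```

The returned value coincides with the direct sum over the missing band, which is the quantity the fitted error curves of the next section approximate.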

In Fig. 4, red, green, and blue represent the error curves of $f(\bar{X})^{\prime}$, $f(\bar{Y})^{\prime}$ and $f(\bar{Z})^{\prime}$, respectively. The black line segments stand for the fitting functions of the error curves; the expression of the fitting function is shown in Eq. (3). Due to the particularity of the $f(\bar{X})^{\prime}$ error values, the error model is fitted piecewise.

$$\left\{ {\begin{array}{l} {f(\bar{X}^{\prime}) = \frac{1}{{\sqrt {2\pi } \times 4}}{e^{ - \frac{{{{[\frac{{\bar{X} - 390}}{{10}} - 5.7]}^2}}}{{2 \times {4^2}}}}} \times 4.4061,\ (400 \le \bar{X} \le 500)}\\ {f(\bar{X}^{\prime}) = \frac{1}{{\sqrt {2\pi } \times 12.2}}{e^{ - \frac{{{{[\frac{{\bar{X} - 390}}{{10}} - 20.107]}^2}}}{{2 \times {{12.2}^2}}}}} \times 22.7,\ (500 < \bar{X} \le 700)} \end{array}} \right.$$
The fitting equation of $f(\bar{Y}^{\prime})$ is shown in Eq. (4).
$$f(\bar{Y}^{\prime}) = \frac{1}{{\sqrt {2\pi } \times 19.2}}{e^{ - \frac{{{{[\frac{{\bar{Y} - 375}}{5} - 16.3]}^2}}}{{2 \times {{19.2}^2}}}}} \times 26.8,\ (400 \le \bar{Y} \le 700)$$
The fitting equation of $f(\bar{Z}^{\prime})$ is shown in Eq. (5).
$$f(\bar{Z}^{\prime}) = \frac{1}{{\sqrt {2\pi } \times 4.94}}{e^{ - \frac{{{{[\frac{{\bar{Z} - 375}}{5} - 6.1]}^2}}}{{2 \times {{4.94}^2}}}}} \times 27,\ (400 \le \bar{Z} \le 700)$$
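The fitted error curves of Eqs. (3)–(5) are scaled Gaussians of a transformed wavelength variable and can be evaluated directly. The sketch below treats the printed constants exactly as given (mean in transformed units, standard deviation, amplitude, wavelength offset and scale):

```python
import math

def gauss(lmbda, mu_u, sigma, amp, offset, scale):
    """Scaled Gaussian of the form used in Eqs. (3)-(5):
    amp/(sqrt(2*pi)*sigma) * exp(-((lmbda-offset)/scale - mu_u)^2 / (2*sigma^2))."""
    u = (lmbda - offset) / scale
    return amp / (math.sqrt(2 * math.pi) * sigma) * math.exp(-((u - mu_u) ** 2) / (2 * sigma ** 2))

def f_x_err(lmbda):
    """Piecewise fit of Eq. (3) for the X error curve."""
    if 400 <= lmbda <= 500:
        return gauss(lmbda, 5.7, 4.0, 4.4061, 390.0, 10.0)
    if 500 < lmbda <= 700:
        return gauss(lmbda, 20.107, 12.2, 22.7, 390.0, 10.0)
    raise ValueError("wavelength outside 400-700 nm")

def f_y_err(lmbda):
    """Fit of Eq. (4) for the Y error curve."""
    return gauss(lmbda, 16.3, 19.2, 26.8, 375.0, 5.0)

def f_z_err(lmbda):
    """Fit of Eq. (5) for the Z error curve."""
    return gauss(lmbda, 6.1, 4.94, 27.0, 375.0, 5.0)
```

With these constants, the $Y$ error curve peaks near 456.5 nm and the $Z$ error curve near 405.5 nm, consistent with the error being concentrated where the respective matching functions carry most of their weight.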

When a single channel is missing, the combination of Eqs. (3)–(5) and the adaptive parameter h corrects the X, Y, and Z values of the color. When a combination of multiple channels is missing, the approximate error is given by the area enclosed by the channel interval, the x-axis, and the fitted error curve; this area is calculated by integration, and the final approximate error is adjusted with the adaptive parameter h.


Fig. 4. Error function fitting result.


The complete missing spectral reflectance correction model of $f(\bar{X})$ is shown in Eq. (6).

$$\left\{ {\begin{array}{l} {f(\bar{X}) = X^{\prime} + 4.4061 \times h\int_{{\delta_s}}^{{\delta_e}} {\frac{1}{{\sqrt {2\pi } \times 4}}{e^{ - \frac{{{{[\frac{{\bar{x} - 375}}{5} - 5.7]}^2}}}{{2 \times {4^2}}}}}\,\textrm{d}\bar{x}},\ (400 \le {\delta_s} \le 500;\ \textrm{if } {\delta_e} > 500,\ {\delta_e} = 500)}\\ {f(\bar{X}) = X^{\prime} + 22.7 \times h\int_{{\delta_s}}^{{\delta_e}} {\frac{1}{{\sqrt {2\pi } \times 12.2}}{e^{ - \frac{{{{[\frac{{\bar{x} - 375}}{5} - 20.107]}^2}}}{{2 \times {{12.2}^2}}}}}\,\textrm{d}\bar{x}},\ (500 \le {\delta_e} \le 700;\ \textrm{if } {\delta_s} < 500,\ {\delta_s} = 500)} \end{array}} \right.$$

The missing spectral reflectance correction model of $f(\bar{Y})$ is shown as Eq. (7).

$$f(\bar{Y}) = Y^{\prime} + 26.8 \times h\int_{{\delta _s}}^{{\delta _e}} {\frac{1}{{\sqrt {2\pi } \times 19.2}}{e^{ - \frac{{{{[\frac{{\bar{y} - 375}}{5} - 16.3]}^2}}}{{2 \times {{19.2}^2}}}}}\,\textrm{d}\bar{y}},\ (400 \le {\delta _s} < {\delta _e} \le 700)$$

The missing spectral reflectance correction model of $f(\bar{Z})$ is written as Eq. (8).

$$f(\bar{Z}) = Z^{\prime} + 27 \times h\int_{{\delta _s}}^{{\delta _e}} {\frac{1}{{\sqrt {2\pi } \times 4.94}}{e^{ - \frac{{{{[\frac{{\bar{z} - 375}}{5} - 6.1]}^2}}}{{2 \times {{4.94}^2}}}}}\,\textrm{d}\bar{z}},\ (400 \le {\delta _s} < {\delta _e} \le 700)$$
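The correction step of Eqs. (6)–(8) adds to the incomplete tristimulus value the area under the fitted Gaussian over the missing interval, scaled by the amplitude and by h. A trapezoid-rule sketch for the Y channel of Eq. (7), using the constants as printed (the function and argument names are illustrative):

```python
import numpy as np

def y_correction(y_missing, delta_s, delta_e, h, n=501):
    """Corrected Y per Eq. (7): Y' plus 26.8 * h times the area under the
    fitted Gaussian error density over the missing interval [delta_s, delta_e]."""
    lam = np.linspace(delta_s, delta_e, n)
    u = (lam - 375.0) / 5.0
    density = np.exp(-((u - 16.3) ** 2) / (2 * 19.2 ** 2)) / (np.sqrt(2 * np.pi) * 19.2)
    # Trapezoid rule for the integral of Eq. (7).
    area = np.sum((density[:-1] + density[1:]) * np.diff(lam)) / 2.0
    return y_missing + 26.8 * h * area
```

As expected from the model, the correction is strictly positive and grows monotonically as the missing interval widens.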

For the error models of different colors, although the trend is similar to the mean curve in Fig. 4, the error values differ. As shown in Fig. 5, the three color blocks all follow the trend of $\bar{x}$ in Fig. 4, but with different spectral reflectance energies. For example, color block 1 has greater spectral reflectance than color blocks 2 and 3 in every channel of the visible range, and therefore higher energy.


Fig. 5. Error images of different colors


When calculating the color difference of color blocks with different energy levels, the energy entropy equation [35] is introduced to calculate the adaptive parameter h, as shown in Eq. (9).

$$h = \frac{{\sum\limits_{i = {\delta _s}}^{{\delta _e}} {[{({c_i} + {{\bar{c}}_i}){{\log }_{12}}({c_i} + {{\bar{c}}_i}) - {c_i}{{\log }_{12}}{c_i} - {{\bar{c}}_i}{{\log }_{12}}{{\bar{c}}_i}} ]} }}{{({\delta _e} - {\delta _s})/10 + 1}}$$

The parameter $i$ represents the spectral reflectance band, incremented in steps of 10 nm, and $c_i$ represents the spectral reflectance value in band $i$. The adaptive parameter h adjusts the color difference model in accordance with the available spectral reflectance: it scales up the color difference model of color blocks with low spectral energy, and scales down that of color blocks with excessively high spectral energy.
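Equation (9) can be computed directly once the per-band reflectances are known. In this sketch, `c_bar` stands for the paired reference reflectance term $\bar{c}_i$ of Eq. (9), whose exact construction follows the cited energy entropy method [35]; the dictionaries and their values here are hypothetical:

```python
import math

def adaptive_h(c, c_bar, delta_s, delta_e, step=10):
    """Adaptive parameter h of Eq. (9). `c` maps a band (nm) to its measured
    spectral reflectance; `c_bar` maps a band to the paired reference term."""
    def plogp(v):
        # v * log_12(v), with the usual 0 * log 0 = 0 convention.
        return v * math.log(v, 12) if v > 0 else 0.0

    total = 0.0
    for band in range(delta_s, delta_e + 1, step):
        ci, cbi = c[band], c_bar[band]
        total += plogp(ci + cbi) - plogp(ci) - plogp(cbi)
    n_bands = (delta_e - delta_s) / step + 1  # number of 10 nm bands
    return total / n_bands
```

The base-12 logarithm and the band-count denominator are taken directly from Eq. (9); for reflectances in (0, 1) the entropy-style numerator is positive, so h acts as a positive per-band scaling factor.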

3. Experiment

3.1 Error model

Before demonstrating the accuracy of the spectral missing color correction algorithm based on the adaptive parameter fitting model, the proposed error model needs to be verified. The final error value is calculated by combining the actually computed error value with the adaptive parameter h, in the same way as in Eqs. (6)–(8). The accuracy of the error model is verified on both a color card and hyperspectral images. The X-rite Color Checker Classic 24-color standard color card is used, as displayed in Fig. 6(a). The hyperspectral images are from the ICVL (Interdisciplinary Computational Vision Laboratory) hyperspectral data set. Owing to the high resolution of the hyperspectral images ($1392\times1300$), 15 images from the data set are used, as shown in Fig. 6(b), and 1,000 pixels are sampled from each image for verification.


Fig. 6. Data set.


The error model was verified by the DE2000 color difference index, which measures the color difference between the color generated by the error model and the actual color for the specified missing channel; the smaller the index value, the closer the two colors. The color differences of the X-rite Color Checker Classic 24-color standard color card are shown in the heat map of Fig. 7 (D65/10°).

The x-axis of the heat map in Fig. 7 represents the missing spectral reflectance wavelength, and the y-axis indicates the 24 colors on the color card; the serial numbers of the color blocks are shown in Fig. 6(a). Each cell represents the DE2000 color difference between the calculated color and the color estimated by the error model after the corresponding spectral reflectance band is removed. Clearly, the error model can accurately estimate the error when part of the spectrum of the X-rite Color Checker Classic 24 is missing. When the missing band is at 450 nm and 460 nm, the color difference is larger due to cumulative error.


Fig. 7. Accuracy of X-RITE Color Checker Classic Error Model.


Similarly, the color cast predicted by the error model and that caused by the actually missing spectrum were compared on the hyperspectral images, and the difference between the color gamut of the actually missing spectrum and that of the error model is reflected by the color gamut cast map. As shown in Fig. 8, 1000 random values form the largest triangular area in the CIE 1931 chromaticity diagram (D65/10°).


Fig. 8. Color gamut cast map.


In the figure, the positions of the two triangles almost coincide. The average color difference values of the missing channels over the 1000 random sampling points are listed in Table 1.


Table 1. Average color difference of each missing channel value.

3.2 Correction model

After verifying the accuracy of the error values estimated by the error model, simulation experiments were performed to confirm the accuracy of the spectral missing color correction model from three aspects: color cards, hyperspectral images and real hyperspectral lidar echo data. Because both the color cards and the hyperspectral images provide the color information of the original image, the accuracy of the color correction model can be assessed by comparison with the original colors in simulation. However, the echo signal data of the hyperspectral lidar has no ground truth of the real surface colors; it can therefore only be assessed subjectively by visual comparison with the original object under the fixed illuminant, and objective analysis is not applicable.

In the simulation experiments, part of the spectral reflectance was removed (the specified channel values were set to 0) to simulate the loss of spectral channel information in real situations. First, the loss of a single spectral reflectance channel was simulated, with a step of 10 nm in the 400–700 nm range. Five comparison algorithms were used to recover the correct color: GBDT (Gradient Boosting Decision Tree) [25], the Bi-Inversed Gaussian Model [36] and an interpolation-based method [37] for partially missing spectra, as well as U-net-based [38] and KNN-based [39] traditional color correction algorithms. The experimental results are shown in Fig. 9. The proposed color correction method exhibits obvious advantages in correcting the color cast caused by a single missing channel in the 400–700 nm spectral reflectance bands (D65/10°). The data are the ICVL hyperspectral data set and the X-rite ColorChecker Classic 24, X-rite ColorChecker 140 and Munsell 1269 color cards, obtained by simulation calculation.
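Rendering corrected tristimulus values for visual comparison, as in the figures, requires an XYZ-to-sRGB conversion. The sketch below uses the standard sRGB matrix and transfer function; it is not specific to this paper and assumes XYZ is normalized so that the white point has $Y = 1$:

```python
import numpy as np

# Standard linear transform from XYZ (D65 white, Y of white = 1) to linear sRGB.
M_XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_srgb(xyz):
    """Render an XYZ triple as display sRGB in [0, 1]."""
    rgb = M_XYZ_TO_SRGB @ np.asarray(xyz, dtype=float)
    rgb = np.clip(rgb, 0.0, 1.0)  # out-of-gamut values are clipped
    # sRGB transfer function (gamma encoding).
    return np.where(rgb <= 0.0031308, 12.92 * rgb, 1.055 * rgb ** (1 / 2.4) - 0.055)
```

A quick sanity check is that the D65 white point (X, Y, Z ≈ 0.9505, 1.0, 1.089) maps to approximately (1, 1, 1).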


Fig. 9. Color difference of single channel missing color correction.


Meanwhile, multi-channel error correction was also verified based on the above simulation experiments. The multi-channel simulation mainly removes three parts of the channel information, namely 400–460 nm, 530–570 nm and 670–700 nm. These three bands were selected because the R, G, B primary wavelengths specified by the CIE lie in these intervals, which are also the parts most likely to be lost when hyperspectral LiDAR acquires spectral reflectance. In addition, a comparative experiment on color correction with multiple missing channels was carried out on the X-rite Color Checker Classic 24 color card. The experimental results are presented in Supplement 1, Fig. S1 (D65/10°).

To eliminate the statistical bias caused by the small sample size of the X-rite Color Checker Classic 24, the experiment was also carried out on the Munsell 1269 color card, as shown in Supplement 1, Fig. S2 (D65/10°).

In this study, the DE2000 color difference equation was used to compare the two colors of each corresponding color block of the color cards. The average color difference over the corresponding 1293 color blocks is shown in Table 2 (D65/10°).


Table 2. Average color difference of 1293 color blocks.

The experiments on the X-rite Color Checker Classic and the Munsell 1269 color card verify that the proposed spectral missing color correction algorithm exhibits higher accuracy than the above five methods in color block correction. Moreover, the average color difference between the color blocks corrected by the proposed algorithm and the original color blocks is below 5, which satisfies the needs of target color recognition and human color perception in various application scenarios.

The same experimental method was applied to hyperspectral images to verify whether the proposed correction algorithm can be effectively applied to them. The experimental data are 24 hyperspectral images with RGB reference images. The comparison results of some experiments are provided in Supplement 1, Fig. S3 (D65/10°).

Since reference ground truth images existed, the PSNR and SSIM image quality evaluation indexes were also used, and all six color correction methods were compared. The evaluation results are the averages of all indexes over the selected 24 images and are presented in Table 3; for DE2000, the upper value is the mean and the lower value is the maximum (D65/10°).


Table 3. Experimental results of 24 ICVL hyperspectral data sets.

As shown in Table 3, for color-cast hyperspectral images caused by missing channels, the proposed algorithm restores color more accurately and provides higher-quality hyperspectral images than the other five color correction algorithms.

Finally, the proposed algorithm was applied to the echo signal processing of a real hyperspectral lidar with the following operating parameters. The supercontinuum laser source has a repetition rate of 0.1–1 MHz, a pulse duration of 4 ns and an output power greater than 6 W; the spectral range of the laser is 400–2400 nm. To ensure the data quality of the backscattered signals, a high-precision two-dimensional scanning platform is adopted, equipped with an off-axis parabolic mirror of high reflectance and a Schmidt-Cassegrain telescope with a focal length of 400 mm and a diameter of 200 mm. The collected signal is guided by fiber to a grating spectrometer. To accurately project the spectral signals onto the detectors, a 150 g/mm blazed grating with high spectroscopic efficiency is adopted, and a 32-element photomultiplier tube array sensitive over 300–920 nm is selected as the receiving sensor. The channel energy values collected from a standard diffuse reflection whiteboard were used as the peak values for the traditional normalization method. The experimental results are shown in Fig. 10 (D65/10°).


Fig. 10. Experimental results of ground objects of 7 colors.


As shown in Fig. 10, the proposed spectral missing color correction algorithm can be used in the real working environment of hyperspectral lidar, and the surface colors of the ground objects of 7 colors processed by this algorithm are more realistic and closer to those of the real objects. However, the color cast caused by mixed pixels cannot be accurately restored, and this problem deserves in-depth study.

3.3 Limitations and prospects

The above experiments were carried out in simulation and laboratory environments without considering the negative influence of backscattering intensity on high-precision spectrum acquisition. During long-distance transmission in the atmosphere, the laser is attenuated to a certain extent. Owing to Rayleigh scattering, which affects blue light most strongly, color correction based on spectral signals in actual scenes is more complicated and therefore more difficult.

Meanwhile, this study does not consider the decrease in correction performance when the missing spectrum accounts for a large proportion of the whole spectral information. This is because a mathematical model relating the number of missing spectral bands to the color correction accuracy has not been established, so the relation between the number of missing bands and the color cast cannot be estimated.

Finally, since a real hyperspectral lidar suffers from the mixed pixel problem, when the light spot falls at the junction of two or more colors, this study fails to accurately correct the color of such spectral reflectance.

In summary, this study proposed a color cast correction method that requires the spectral range of the missing part to be known and the signal-to-noise ratio of the available spectral signal to be as high as possible. Under these conditions, the color cast caused by spectrum missing can be accurately corrected in laboratory and simulation environments.

4. Conclusions

This study proposed a spectral missing color correction algorithm based on an adaptive parameter fitting model, which can directly convert partially missing spectral signals into accurate color information. First, the accuracy of the error model was confirmed through simulation experiments on the X-rite Color Checker Classic 24 and hyperspectral image data points, so the color cast caused by missing channels can be accurately calculated. Second, the accuracy of the color correction model was verified on the X-rite Color Checker Classic 24, the Munsell 1269 color card and the ICVL hyperspectral image data set. Compared with traditional methods, the color difference between the color blocks corrected by the proposed algorithm and the standard color blocks is smaller, and the color-corrected hyperspectral images perform better on various image quality evaluation indexes. Finally, the actual echo signal data of a hyperspectral lidar verified that the color correction algorithm also achieves good results in practical application scenarios. Partial spectrum missing is demonstrated to follow certain rules, according to which the color cast can be accurately corrected. Moreover, this provides an important practical foundation for 3D real scene reconstruction, smart city construction and real-time ecological monitoring based on hyperspectral lidar echo signals.

Funding

National Natural Science Foundation of China (61275172, 61575147).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in Ref. [22] and Ref. [25].

Supplemental document

See Supplement 1 for supporting content.

References

1. M. Kristina, Y. Kyrollos, and A. Neerja, “Spectral DiffuserCam: lensless snapshot hyperspectral imaging with a spectral filter array,” Optica 7(10), 1298–1307 (2020).

2. E. Roshan and B. Funt, “Computational color prediction versus least-dissimilar matching,” J. Opt. Soc. Am. A 35(4), B292–298 (2018).

3. C. Bo, H. Lu, and D. Wang, “Spectral-spatial k-nearest neighbor approach for hyperspectral image classification,” Multimedia Tools Appl. 77(9), 10419–10436 (2018).

4. S. A. Burns, “Numerical methods for smoothest reflectance reconstruction,” Color Res. Appl. 45(1), 8–21 (2020).

5. S. Kaasalainen, M. Åkerblom, O. Nevalainen, T. Hakala, and M. Kaasalainen, “Uncertainty in multispectral lidar signals caused by incidence angle effects,” Interface Focus 8(2), 20170033 (2018).

6. J. Sun, S. Shi, J. Yang, B. Chen, W. Gong, L. Du, F. Mao, and S. Song, “Estimating leaf chlorophyll status using hyperspectral lidar measurements by PROSPECT model inversion,” Remote Sens. Environ. 212, 1–7 (2018).

7. X. Zhao, S. Shi, J. Yang, W. Gong, J. Sun, B. Chen, K. Guo, and B. Chen, “Active 3D imaging of vegetation based on multi-wavelength fluorescence LiDAR,” Sensors 20(3), 935 (2020).

8. W. Lixia, S. Aditya, H. J. Yngve, and W. Xiaoxia, “Optimized light source spectral power distribution for RGB camera based spectral reflectance recovery,” Opt. Express 29(16), 24695–24713 (2021).

9. H. Zheng, C. Wei, L. Qiang, W. Yu, P. Michael R., L. Ying, and J. Liang, “Towards an optimum colour preference metric for white light sources: a comprehensive investigation based on empirical data,” Opt. Express 29(5), 6302–6319 (2021).

10. J. Liang, X. Wan, Q. Liu, L. Chan, and J. Li, “Research on filter selection method for broadband spectral imaging system based on ancient murals,” Color Res. Appl. 41(6), 585–595 (2016).

11. A. Pelagotti, A. D. Mastio, A. D. Rosa, and A. Piva, “Multispectral imaging of paintings,” IEEE Signal Process. Mag. 25(4), 27–36 (2008).

12. C. S. Chane, A. Mansouri, F. S. Marzani, and F. Boocks, “Integration of 3D and multispectral data for cultural heritage applications: survey and perspectives,” Image and Vision Computing 31(1), 91–102 (2013).

13. M. M. Amiri and M. D. Fairchild, “A strategy toward spectral and colorimetric color reproduction using ordinary digital cameras,” Color Res. Appl. 43(5), 675–684 (2018).

14. M. Khorasaninejad and F. Capasso, “Metalenses: versatile multifunctional photonic components,” Science 358(6367), 1–10 (2017).

15. X. Li, T. Z. Jie, and F. Nicholas X, “Grayscale stencil lithography for patterning multispectral color filters,” Optica 7(9), 1154–1161 (2020).

16. L. Jinxing, X. Kaida, P. Michael R., W. Xiaoxia, and L. Changjun, “Spectra estimation from raw camera responses based on adaptive local-weighted linear regression,” Opt. Express 27(4), 5165–5180 (2019).

17. W. Lixia, W. Xiaoxia, X. Gensheng, and L. Jinxing, “Sequential adaptive estimation for spectral reflectance based on camera responses,” Opt. Express 28(18), 25830–25842 (2020).

18. B. Wang, S. Song, S. Shi, Z. Chen, F. Li, D. Wu, D. Liu, and W. Gong, “Multichannel interconnection decomposition for hyperspectral lidar waveforms detected from over 500 m,” IEEE Trans. Geosci. Remote Sens. 60, 1–14 (2022).

19. Q. Xu, Y. Jian, S. Shuo, G. Wei, D. Lin, C. Biwu, and C. Bowen, “Analyzing the effect of incident angle on echo intensity acquired by hyperspectral LiDAR based on the Lambert-Beckman model,” Opt. Express 29(7), 11055–11069 (2021).

20. C. Biwu, S. Shuo, S. Jia, G. Wei, Y. Jian, D. Lin, G. Kuanghui, W. Binhui, and C. Bowen, “Hyperspectral LiDAR point cloud segmentation based on geometric and spectral information,” Opt. Express 27(17), 24043–24059 (2019).

21. F. Mao, L. Xi, S. Jie, Z. Liang, W. Gong, and W. Chen, “Simulation and retrieval for spaceborne aerosol and cloud high spectral resolution lidar of China,” Sci. China 65(3), 570–583 (2022).

22. B. Arad and O. Ben-Shahar, “Sparse recovery of hyperspectral signal from natural rgb images,” in Computer Vision – ECCV 2016, Springer LNCS 9911, 19–34 (2016).

23. B. Wang, S. Song, W. Gong, X. Cao, D. He, Z. Chen, X. Lin, F. Li, and J. Sun, “Color restoration for full-waveform multispectral LiDAR data,” Remote Sens. 12(4), 593–611 (2020).

24. B. Chen, S. Shi, W. Gong, J. Sun, B. Chen, L. Du, J. Yang, K. Guo, and X. Zhao, “True-color three-dimensional imaging and target classification based on hyperspectral LiDAR,” Remote Sens. 11(13), 1541 (2019).

25. W. Tengfeng, W. Xiaoxia, C. Bowen, and S. Shuo, “True-color reconstruction based on hyperspectral LiDAR echo energy,” Remote Sens. 13(15), 2854–2869 (2021).

26. H. Luo and X. Wan, “Estimating polynomial coefficients to correct improperly white-balanced sRGB images,” IEEE Signal Process. Lett. 28, 1709–1713 (2021).

27. S.-M. Woo, S.-H. Lee, J.-S. Yoo, and J.-O. Kim, “Improving color constancy in an ambient light environment using the phong reflection model,” IEEE Trans. Image Process. 27(4), 1862–1877 (2018).

28. Y. Qian, J.-K. Kamarainen, J. Nikkanen, and J. Matas, “On finding gray pixels,” IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8054–8062 (2019).

29. Z. Zhao, H.-M. Hu, H. Zhang, F. Chen, and Q. Guo, “Improving color constancy using chromaticity-line prior,” IEEE Trans. Multimedia, 1–15 (2022).

30. F. Laakom, N. Passalis, J. Raitoharju, J. Nikkanen, A. Tefas, A. Iosifidis, and M. Gabbouj, “Bag of color features for color constancy,” IEEE Trans. Image Process. 29, 7722–7734 (2020).

31. M. Buzzelli, J. van de Weijer, and R. Schettini, “Deep dichromatic guided learning for illuminant estimation,” IEEE Trans. Image Process. 30, 3623–3636 (2021).

32. Y.-C. Lo, C.-C. Chang, H.-C. Chiu, Y.-H. Huang, C.-P. Chen, Y.-L. Chang, and K. Jou, “CLCC: contrastive learning for color constancy,” IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8049–8059 (2021).

33. M. Afifi, J. T. Barron, C. LeGendre, Y.-T. Tsai, and F. Bleibel, “Cross-camera convolutional color constancy,” IEEE/CVF International Conference on Computer Vision (ICCV), 1961–1970 (2021).

34. M. Afifi, P. Abhijith, F. Graham, and B. Michael S, “As-projective-as-possible bias correction for illumination estimation algorithms,” J. Opt. Soc. Am. A 36(1), 71–78 (2019).

35. Z. Qu, T. Wang, S. An, and L. Ling, “Image seamless stitching and straightening based on the image block,” IET Image Process. 12(8), 1361–1369 (2018).

36. L. Xuan, Z. Ye, T. Yidan, and D. Zhaolun, “Estimation of vegetation water content based on bi-inverted gaussian fitting spectral feature analysis using hyperspectral data,” Remote Sens. Technol. Appl. 31(6), 1075–1082 (2016).

37. L. Zou and S. Tang, “A new approach to general interpolation formulae for bivariate interpolation,” Abstract Appl. Anal. 2014, 1–11 (2014).

38. M. Afifi, B. Price, S. Cohen, and M. S. Brown, “When color constancy goes wrong: correcting improperly white balanced images,” IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1535–1544 (2020).

39. M. Afifi and M. S. Brown, “Deep white-balance editing,” IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1394–1403 (2020).



Figures (10)

Fig. 1. Schematic diagram of color reproduction of object surface. (D65/10°)
Fig. 2. Comparison of color cast of missing channels. (D65/10°)
Fig. 3. Color difference caused by partial spectral band missing.
Fig. 4. Error function fitting result.
Fig. 5. Error images of different colors.
Fig. 6. Data set.
Fig. 7. Accuracy of X-rite Color Checker Classic error model.
Fig. 8. Color gamut cast map.
Fig. 9. Color difference of single channel missing color correction.
Fig. 10. Experimental results of ground objects of 7 colors.

Tables (3)

Table 1. Average color difference of each missing channel value.
Table 2. Average color difference of 1293 color blocks.
Table 3. Experimental results of 24 ICVL hyperspectral data sets.

Equations (9)

$$X = K\int l(\lambda)\,r(\lambda)\,\bar{x}(\lambda)\,d\lambda;\quad Y = K\int l(\lambda)\,r(\lambda)\,\bar{y}(\lambda)\,d\lambda;\quad Z = K\int l(\lambda)\,r(\lambda)\,\bar{z}(\lambda)\,d\lambda$$

$$f(\bar{X})_{\delta} = \frac{1}{1269}\sum_{n=1}^{1269}\left[\int_{400}^{700} l(\lambda)r(\lambda)\bar{x}(\lambda)\,d\lambda - \int_{400}^{\delta_s} l(\lambda)r(\lambda)\bar{x}(\lambda)\,d\lambda - \int_{\delta_e}^{700} l(\lambda)r(\lambda)\bar{x}(\lambda)\,d\lambda\right]$$

$$f(\bar{Y})_{\delta} = \frac{1}{1269}\sum_{n=1}^{1269}\left[\int_{400}^{700} l(\lambda)r(\lambda)\bar{y}(\lambda)\,d\lambda - \int_{400}^{\delta_s} l(\lambda)r(\lambda)\bar{y}(\lambda)\,d\lambda - \int_{\delta_e}^{700} l(\lambda)r(\lambda)\bar{y}(\lambda)\,d\lambda\right]$$

$$f(\bar{Z})_{\delta} = \frac{1}{1269}\sum_{n=1}^{1269}\left[\int_{400}^{700} l(\lambda)r(\lambda)\bar{z}(\lambda)\,d\lambda - \int_{400}^{\delta_s} l(\lambda)r(\lambda)\bar{z}(\lambda)\,d\lambda - \int_{\delta_e}^{700} l(\lambda)r(\lambda)\bar{z}(\lambda)\,d\lambda\right]$$

$$f(\bar{X}) = \begin{cases} \dfrac{1}{\sqrt{2\pi}\times 4}\, e^{-\left[\frac{\bar{X}-390}{10}-5.7\right]^2 / (2\times 4^2)} \times 4.4061, & 400 \le \bar{X} \le 500 \\[1ex] \dfrac{1}{\sqrt{2\pi}\times 12.2}\, e^{-\left[\frac{\bar{X}-390}{10}-20.107\right]^2 / (2\times 12.2^2)} \times 22.7, & 500 < \bar{X} \le 700 \end{cases}$$

$$f(\bar{Y}) = \frac{1}{\sqrt{2\pi}\times 19.2}\, e^{-\left[\frac{\bar{Y}-375}{5}-16.3\right]^2 / (2\times 19.2^2)} \times 26.8, \quad 400 \le \bar{Y} \le 700$$

$$f(\bar{Z}) = \frac{1}{\sqrt{2\pi}\times 4.94}\, e^{-\left[\frac{\bar{Z}-375}{5}-6.1\right]^2 / (2\times 4.94^2)} \times 27, \quad 400 \le \bar{Z} \le 700$$

$$f(\bar{X}) = \begin{cases} X + 4.4061\times h\displaystyle\int_{\delta_s}^{\delta_e} \frac{1}{\sqrt{2\pi}\times 4}\, e^{-\left[\frac{\bar{x}-375}{5}-5.7\right]^2 / (2\times 4^2)}\, d\bar{x}, & 400 \le \delta_s \le 500\ (\text{if } \delta_e > 500,\ \delta_e = 500) \\[1ex] X + 22.7\times h\displaystyle\int_{\delta_s}^{\delta_e} \frac{1}{\sqrt{2\pi}\times 12.2}\, e^{-\left[\frac{\bar{x}-375}{5}-20.107\right]^2 / (2\times 12.2^2)}\, d\bar{x}, & 500 \le \delta_e \le 700\ (\text{if } \delta_s < 500,\ \delta_s = 500) \end{cases}$$

$$f(\bar{Y}) = Y + 26.8\times h\int_{\delta_s}^{\delta_e} \frac{1}{\sqrt{2\pi}\times 19.2}\, e^{-\left[\frac{\bar{y}-375}{5}-16.3\right]^2 / (2\times 19.2^2)}\, d\bar{y}, \quad 400 \le \delta_s < \delta_e \le 700$$

$$f(\bar{Z}) = Z + 27\times h\int_{\delta_s}^{\delta_e} \frac{1}{\sqrt{2\pi}\times 4.94}\, e^{-\left[\frac{\bar{z}-375}{5}-6.1\right]^2 / (2\times 4.94^2)}\, d\bar{z}, \quad 400 \le \delta_s < \delta_e \le 700$$

$$h = \frac{\sum_{i=\delta_s}^{\delta_e}\left[(c_i+\bar{c}_i)\log_{12}(c_i+\bar{c}_i) - c_i\log_{12}c_i - \bar{c}_i\log_{12}\bar{c}_i\right]}{(\delta_e-\delta_s)/10 + 1}$$
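As a rough illustration of the Y-channel correction and the weight h above, the following Python sketch numerically integrates the fitted Gaussian over a missing band and adds the result back to Y. The per-channel magnitudes c_i and c̄_i, the helper names, and the sample values are assumptions for illustration, not the paper's implementation:

```python
import math

def y_integrand(lam):
    """Fitted Gaussian term for the Y channel, as written in the correction formula."""
    u = (lam - 375) / 5 - 16.3
    return math.exp(-u * u / (2 * 19.2 ** 2)) / (math.sqrt(2 * math.pi) * 19.2)

def band_integral(d_s, d_e, step=1.0):
    """Trapezoidal integral of the Y-channel Gaussian over the missing band [d_s, d_e]."""
    n = int((d_e - d_s) / step)
    xs = [d_s + i * step for i in range(n + 1)]
    return sum((y_integrand(a) + y_integrand(b)) / 2 * step
               for a, b in zip(xs, xs[1:]))

def h_weight(c, c_bar, d_s, d_e):
    """Weight h; c and c_bar are hypothetical per-channel magnitudes for the band."""
    log12 = lambda v: math.log(v, 12)
    num = sum((ci + cbi) * log12(ci + cbi) - ci * log12(ci) - cbi * log12(cbi)
              for ci, cbi in zip(c, c_bar))
    return num / ((d_e - d_s) / 10 + 1)

def correct_Y(Y, d_s, d_e, h):
    """Add back the Gaussian-modelled contribution of the missing band to Y."""
    return Y + 26.8 * h * band_integral(d_s, d_e)

# Hypothetical example: six 10 nm channels missing between 500 and 550 nm
h = h_weight([1.2, 1.1, 1.0, 0.9, 1.0, 1.1],
             [1.0, 1.0, 1.1, 1.0, 0.9, 1.0], 500, 550)
Y_corrected = correct_Y(42.0, 500, 550, h)   # Y moves back toward the full-spectrum value
```

The same pattern applies to X and Z with their own fitted means, standard deviations, and scale factors; only the constants change.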