
Estimation of daylight spectral power distribution from uncalibrated hyperspectral radiance images

Open Access

Abstract

This paper introduces a novel framework for estimating the spectral power distribution of daylight illuminants in uncalibrated hyperspectral images, particularly beneficial for drone-based applications in agriculture and forestry. The proposed method uniquely combines image-dependent plausible spectra with a database of physically possible spectra, utilizing an image-independent principal component space (PCS) for estimations. This approach effectively narrows the search space in the spectral domain and employs a random walk methodology to generate spectral candidates, which are then intersected with a pre-trained PCS to predict the illuminant. We demonstrate superior performance compared to existing statistics-based methods across various metrics, validating the framework’s efficacy in accurately estimating illuminants and recovering reflectance values from radiance data. The method is validated within the spectral range of 382–1002 nm and shows potential for extension to broader spectral ranges.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Multi- and hyperspectral imaging have gained significant scientific interest in a variety of fields over recent decades, notably in satellite-based earth observation [1], agricultural surveillance [2], food quality assessment [3], and cultural heritage preservation [4]. A fundamental pre-processing step in these applications involves converting the captured radiance data into reflectance values. This is traditionally accomplished by using a reference target as ground truth, a method which presents challenges particularly in uncontrolled outdoor settings due to weather factors, limited site accessibility and/or the size of the area of interest [5].

This study introduces a novel approach to estimate the spectral power distribution (visible and near infrared) of the main illuminant in uncalibrated hyperspectral images, focusing on outdoor scenes where sunlight is the dominant illumination. It leverages image statistics and employs constraints to tackle the ill-posed problem of illuminant estimation. We demonstrate that the proposed approach outperforms state-of-the-art methods in terms of several well established metrics. This work shifts the focus from existing estimation methods, which often rely on specific scene content, to a more robust, scene-independent methodology. The research stands out in two key aspects: the use of an image-independent principal component space and an innovative algorithm combining information from both image-dependent and physically plausible spectra.

2. Related work

Illuminant estimation methods can be broadly classified into four categories. The target-based approach relies on a calibration object present in the scene, such as a Macbeth ColorChecker. Image statistics-based methods leverage image-dependent variables in tandem with statistical frameworks. Physics-based methods model scene-specific physical properties for illumination retrieval. Lastly, learning-based approaches employ machine learning models such as neural networks.

The standard method in illuminant estimation to date is the target-based approach, relying on a reference tile with known reflectance such as a Spectralon or alternatively more cost-effective materials like Teflon [6–8]. Two essential properties for this tile include a Lambertian surface for uniform reflection regardless of viewing angle [9], and near 100% reflectance across the spectrum of interest. Placement and angle of the tile should approximate those of the object being measured. Common practice in airborne or UAV-based applications is to capture reference data before and after the flight [10–12]. Exposure time and other parameters should be optimized to maximize sensor response while avoiding saturation, given that sensor behaviour becomes nonlinear near saturation [11,13,14]. The flat-fielding equation serves to calculate reflectance values from raw sensor data, incorporating dark current intensity and a correction factor for tiles with reflectance factors below 1.0 [7]. Drawbacks of this approach become apparent in uncontrolled environments with changing illumination conditions, demanding multiple reference captures for accuracy [5,15]. Though automation solutions exist [16], they still require manual setup and are ill-suited for most scenarios. Challenges also arise in airborne systems, particularly when it is not possible to properly place a reference target, as in forest canopy measurements [17,18].
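
As an illustration of the flat-fielding step, a minimal numpy sketch follows; the function name, array shapes, and the clipping guard against division by zero are illustrative assumptions rather than the exact formulation of [7]:

    import numpy as np

    def flat_field_reflectance(raw, white, dark, tile_reflectance=1.0):
        # raw, white, dark: arrays of shape (rows, cols, bands) holding
        # the scene capture, the white-reference capture, and the
        # dark-current capture. tile_reflectance corrects for reference
        # tiles whose reflectance factor is below 1.0.
        numerator = raw.astype(np.float64) - dark
        # Guard against division by zero in dead or dark bands.
        denominator = np.clip(white.astype(np.float64) - dark, 1e-9, None)
        return tile_reflectance * numerator / denominator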

Image statistics-based approaches constrain the search space by relying on specific assumptions [19]. A common way to achieve this is through algorithms designed for illuminant chromaticity estimation in color images. These algorithms are rooted in the human visual system’s (HVS) ability for chromatic adaptation and color constancy [20–27]. Notable algorithms include the grayworld [28], retinex or max-RGB [29], shades-of-gray [30], and gray-edge [31] algorithms. The grayworld algorithm adjusts the image colors based on the assumption that the average scene color is gray, effectively neutralizing color casts in uniformly colored scenes. The max-RGB algorithm operates on the premise that the brightest values in each color channel are influenced by the light source, adjusting these to represent white light and correct the light source’s color effect. The shades-of-gray algorithm generalizes the grayworld assumption by calculating the average color using a Minkowski norm, allowing for more flexible adjustments across scenes with varying levels of lightness. The gray-edge algorithm, another extension of grayworld, assumes the average edge color in a scene is gray, enhancing color correction in areas with significant textures or edges. Each of these algorithms employs a unique strategy, making them suited to different scenarios and lighting conditions, and all of them have been extended to multispectral channel estimation [32,33]. Grayworld, for example, is computationally simple but often inaccurate due to its dependency on image content. Similarly, shades-of-gray and max-RGB can both be described through the Minkowski norm with varying parameters for $p$ [30]. Furthermore, spectral variations of these algorithms, particularly max-spectra and spectral gray-edge, have been found to yield the best results [32]. Another distinct method uses a six-channel system and illuminant databases [34], but it suffers from high database dependency.
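
To make these estimators concrete, the following sketch generalizes them to an N-band cube in the spirit of [32]; the function signature, the Minkowski exponent default, and the normalization to unit maximum are illustrative choices:

    import numpy as np

    def spectral_constancy(cube, method="grayworld", p=6):
        # cube: radiance array of shape (rows, cols, bands). Returns one
        # value per band, normalized to unit maximum, as the relative
        # illuminant estimate.
        pixels = cube.reshape(-1, cube.shape[-1]).astype(np.float64)
        if method == "grayworld":          # scene average assumed gray
            est = pixels.mean(axis=0)
        elif method == "max_spectra":      # brightest response per band
            est = pixels.max(axis=0)
        elif method == "shades_of_gray":   # Minkowski p-norm mean
            est = (pixels ** p).mean(axis=0) ** (1.0 / p)
        elif method == "gray_edge":        # average edge magnitude gray
            gy, gx = np.gradient(cube.astype(np.float64), axis=(0, 1))
            mag = np.sqrt(gx ** 2 + gy ** 2).reshape(-1, cube.shape[-1])
            est = (mag ** p).mean(axis=0) ** (1.0 / p)
        else:
            raise ValueError(method)
        return est / est.max()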

Physically-based methods for illuminant estimation utilize the physical attributes of a scene to infer its illumination conditions. These methods typically rely on the specular reflections on object surfaces to estimate the Spectral Power Distribution (SPD) of the light source [35,36]. The underlying principle is the dichromatic reflectance model. The hyperspectral image is usually segmented into specular and nonspecular regions [37,38]. The identification of highlight areas is performed through various techniques, ranging from simple pixel-brightness thresholding to more advanced methods like receptive field modelling [38,39]. After detection, the SPD of the light source is estimated using different strategies, including clustering methods and optimization frameworks [37,38,40]. However, these methods are not without limitations. They often require surfaces with dichromatic properties and may not perform well otherwise [37,38]. Additional constraints involve the need for objects that are both shaded and illuminated [40], or the assumption of convex surfaces [38]. Challenges also arise in accurately detecting specular highlights [41] and in handling scenes mainly composed of vegetation.

Learning-based methods for illuminant estimation, particularly those using neural networks, have gained attention due to developments in deep learning. While abundant research exists in the colorimetric domain [42], fewer works focus on hyperspectral imaging for this application [43–45]. The primary challenge is the requirement for a large volume of training data, specifically for hyperspectral images. Unlike RGB images, for which datasets are plentiful [46], hyperspectral data lack standardization and comparable availability. Issues include variable numbers of channels, spectral ranges, and sensor sensitivities [47]. The authors of [47] offered a model utilizing a pre-trained Convolutional Neural Network (CNN) with an architecture similar to [48]. The network was fine-tuned to estimate illuminants using patches from the spectral cube as input. This method is, however, restricted to the visible range (400–700 nm) with a resolution of 10 nm and is designed to detect only "smooth spectra". Its primary focus is on colorimetric estimation rather than spectral, limiting its applicability for objective evaluation in spectral illuminant prediction [47]. The potential of learning-based methods is promising, but current limitations include the need for massive training data, model-specific applicability, and a focus mainly on colorimetric estimates rather than their spectral counterparts.

3. Methodology

This work proposes a methodology for estimating the Spectral Power Distribution (SPD) of an illuminant from uncalibrated hyperspectral imagery. Our approach integrates two fundamental components, which are detailed in this section: constraints derived from the image data and a data-driven prior knowledge of SPDs. By defining a probable spectral domain and generating ‘spectral candidates’ that reflect potential SPDs, we merge these with a database of real-world illuminants in a Principal Component Space (PCS). The estimated SPD, derived from the intersection within the PCS, is then applied to convert radiance into reflectance, highlighting the significance of spectral candidates in the estimation process. This approach, anchored in image statistics, distinguishes itself from traditional methods by its reliance on synthesized and empirical data for SPD estimation.

3.1 Defining constraints within the spectral search space

Estimating the SPD of illumination from radiance data is challenging due to the absence of ground truth. The proposed method presumes uniform and diffuse illumination across the captured scene. We acknowledge the assumption of uniform and diffuse illumination might seem restrictive; it is, however, an acceptable approximation of the common environmental conditions encountered in drone-based agricultural and forestry surveillance [49,50]. This assumption simplifies the complex interaction of light with various surfaces, enabling a focused study on the spectral estimation from radiance images. The work primarily focuses on daylight spectra for two reasons: suitability for UAV-based applications where weather and lighting rapidly change [5], and the availability of a larger, varied set of daylight reference data. It is also assumed that no self-emitting objects are present in the scene and that radiance data is radiometrically calibrated and normalized between 0 and 1.

With these assumptions, two main constraints are introduced to reduce the search space:

  • The upper limit of possible SPD is determined by a uniform spectrum across all wavelengths. This is termed maxConstraint, and it sets the upper boundary of the search area for an unknown illuminant, as depicted in Fig. 1. Additional constraints based on varying illuminants were considered but dismissed to avoid overfitting at this stage.
  • The lower limit of possible SPD is set by the highest pixel value in each spectral band, which we refer to as minConstraint. Given that the maximum possible response at each wavelength is 1.0, and objects are assumed to be only reflective, this establishes the minimum SPD for the illuminant.

Fig. 1. minConstraint and maxConstraint define the search area for the unknown illuminant SPD.

The SPD of an unknown illuminant is assumed to fall within the area defined by these minConstraint and maxConstraint limits.
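
A compact sketch of how both constraints can be read off a normalized radiance cube; the helper name and array layout are assumptions for illustration:

    import numpy as np

    def spectral_constraints(cube):
        # cube: normalized radiance cube, shape (rows, cols, bands),
        # values in [0, 1]. Since surfaces are assumed purely reflective,
        # the radiance in a band can never exceed the illuminant power
        # there, so the brightest pixel per band bounds the SPD from below.
        n_bands = cube.shape[-1]
        min_constraint = cube.reshape(-1, n_bands).max(axis=0)
        max_constraint = np.ones(n_bands)   # uniform spectrum as upper bound
        return min_constraint, max_constraint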

3.2 Generating the spectral candidates

We generate candidate solutions employing a random walk approach, inspired by Pearson’s formulation [51], which models paths as successive random steps. This stochastic process is useful for exploring variable outcomes through random movements. Initially, we select a random $Y$-value for the starting point $(x_1, y_1)$, with $0 \leq y_1 \leq 1$. For each subsequent point up to $x_n$, we determine $y_i$ values by generating a random number $-1 \leq m_i \leq 1$ that dictates the direction and magnitude of movement along the $Y$-axis. This process yields a "rough" spectrum within the desired range, distinct from white noise.
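
A possible implementation of this random walk is sketched below. The text draws $m_i$ in $[-1, 1]$; the configurable step scale and the clipping of the walk to $[0, 1]$ are illustrative choices that keep the candidate inside the normalized range:

    import numpy as np

    def random_walk_spectrum(n_bands, step=1.0, rng=None):
        # Generate one "rough" spectral candidate by a 1-D random walk.
        # Starts at a random level y_1 in [0, 1]; each subsequent band
        # moves by a uniform random amount m_i in [-step, step].
        rng = np.random.default_rng() if rng is None else rng
        y = np.empty(n_bands)
        y[0] = rng.uniform(0.0, 1.0)
        for i in range(1, n_bands):
            y[i] = np.clip(y[i - 1] + rng.uniform(-step, step), 0.0, 1.0)
        return y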

To smooth this spectrum, we apply the LOWESS (Locally Weighted Scatterplot Smoothing) method [52], a non-parametric regression technique that effectively smooths scatterplots for enhanced data analysis. It combines multiple regression models in a k-nearest-neighbor-based meta-model, providing local adaptability and robustness against outliers. Its performance can be adjusted through a single parameter, making it particularly suitable for smoothing daylight spectra, as noted by [53]. The adaptability of LOWESS allows for detailed analysis and smoothing of data with non-linear relationships or varying patterns of dispersion, enhancing the accuracy of our spectral simulations.
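
With the LOWESS implementation shipped in statsmodels, the smoothing step could look as follows; the wrapper itself is hypothetical, and the default fraction corresponds to the value later found most effective (Section 3.3):

    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    def smooth_candidate(rough, frac=0.03):
        # rough: 1-D array of band values from the random walk. frac is
        # the fraction of points used in each local regression, the
        # single tuning parameter of LOWESS.
        x = np.arange(rough.size, dtype=np.float64)
        # return_sorted=False keeps the smoothed values in input order.
        return lowess(rough, x, frac=frac, return_sorted=False)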

Our objective is not to identify the actual illuminant but to create plausible illuminant spectra for further analysis. To this end, we utilize Principal Component Analysis (PCA) for dimensionality reduction, enhancing noise resilience and constructing a data-driven prior. This involves integrating variable reflectance spectra with empirically measured illuminant Spectral Power Distributions (SPDs) to form a Principal Component Space (PCS). This space encompasses both physically plausible illuminant spectra, such as those from the Granada daylight spectral database, and spectral candidates derived from image data. By situating these elements within the PCS, our goal is to refine illuminant estimation through the intersection of these datasets.

We further refine our analysis using the RANSAC algorithm [54] to fit a line to the set of plausible daylight illuminants in the PCS. RANSAC’s iterative approach, which selects random data subsets for model fitting, effectively filters out outliers, ensuring robust and accurate parameter estimations. This process involves iterative linear model fittings, with the model accruing the highest number of inliers chosen to represent the daylight illuminant dataset in the PCS.
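
A minimal hand-rolled RANSAC line fit in the PCS might look like the sketch below; the iteration count and inlier distance threshold are illustrative parameters, not the authors' settings:

    import numpy as np

    def ransac_line(points, n_iter=1000, tol=0.05, rng=None):
        # points: array (n, k) of illuminant coordinates in the
        # k-dimensional PCS. Returns a point on the line and its
        # unit direction vector.
        rng = np.random.default_rng() if rng is None else rng
        best_inliers, best_model = 0, None
        for _ in range(n_iter):
            a, b = points[rng.choice(len(points), size=2, replace=False)]
            d = b - a
            norm = np.linalg.norm(d)
            if norm < 1e-12:
                continue
            d = d / norm
            # Point-to-line distance: residual after projecting onto d.
            diff = points - a
            dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
            inliers = int((dist < tol).sum())
            if inliers > best_inliers:
                best_inliers, best_model = inliers, (a, d)
        return best_model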

Finally, we map the spectral candidates into the PCS, forming a hyperplane that simplifies their representation. By calculating the centroid $C$ and normal vector $N$ of this hyperplane through singular value decomposition of matrix $M$, which represents the spectral candidates, we streamline the depiction of these candidates within the PCS.
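
The centroid and normal follow directly from an SVD of the centered candidate matrix; a short sketch, with fit_hyperplane as a hypothetical helper name:

    import numpy as np

    def fit_hyperplane(M):
        # M: array (n_candidates, k) of candidate coordinates in the PCS.
        # The centroid C is the mean; the normal N is the right singular
        # vector belonging to the smallest singular value of M - C,
        # i.e., the direction of least variance.
        C = M.mean(axis=0)
        _, _, Vt = np.linalg.svd(M - C, full_matrices=False)
        N = Vt[-1]
        return C, N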

A rough estimate of the illuminant is then achieved by calculating the intersection between the line and the hyperplane, both represented in the PCS. To validate this estimate, an $n$-dimensional convex hull enclosing the valid region of the PCS is constructed using the Quickhull algorithm by [55]. Points from the illuminant dataset within this region are added to define the valid region for the estimated illuminant. If $P_{est}$ lies within this region, it is transformed back into the spectral domain. If not, the closest point within the valid region is selected, and this point is likewise transformed back into the spectral domain.
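
The intersection and validation steps can be sketched as follows, using scipy's Qhull-backed Delaunay triangulation for the point-in-hull test; falling back to the nearest illuminant-dataset point is an illustrative simplification of "the closest point within the valid region":

    import numpy as np
    from scipy.spatial import Delaunay

    def intersect_and_validate(a, d, C, N, valid_points):
        # Line: a + t * d (from ransac_line). Plane: all x with
        # dot(N, x - C) = 0 (from fit_hyperplane). valid_points spans
        # the valid region of the PCS.
        denom = N @ d
        if abs(denom) < 1e-12:
            raise ValueError("line is parallel to the hyperplane")
        t = N @ (C - a) / denom
        p_est = a + t * d
        tri = Delaunay(valid_points)        # Qhull-backed triangulation
        if tri.find_simplex(p_est) >= 0:    # inside the valid region
            return p_est
        # Illustrative fallback: nearest plausible daylight point.
        idx = np.argmin(np.linalg.norm(valid_points - p_est, axis=1))
        return valid_points[idx]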

The full process is visualized in Fig. 2, where the red line represents the illuminant dataset, the plane represents the spectral candidates, the red square marks the intersection point between both, and the red cross represents the ground truth illuminant. This solution works well, particularly in the first three dimensions. The obtained rough estimate of the input cube’s illuminant is then validated, refined, and transformed back into the spectral domain as described below.

Fig. 2. The red line is fitted using RANSAC on the illuminant dataset (green dots); the plane representing the spectral candidates is shown in gray. The intersection point is marked as a red square, whereas the ground truth is shown as a red ‘X’.

Finally, this transformed point is verified to meet the initial constraints.

3.3 Optimization of the input parameters

To optimize model parameters for accurate illuminant estimation, an optimization process was conducted using the complemented goodness-of-fit coefficient (CGFC) metric [56] (detailed in Section 4.2) due to its effectiveness in comparing overall spectral shapes. Radiance cubes were computed for three test images (excluded from the final test dataset) and 20 randomly selected daylight illuminants. Subsequently, various model configurations were examined by altering the following input parameters:

  • number of spectral candidates: 500, 1000, 1500, and 2000;
  • smoothing method: none, Median, and LOWESS;
  • Median kernel size: 3, 5, and 7;
  • LOWESS smoothing factor ($f$): 0.03, 0.08, and 0.15;
  • number of principal components (PC) for reconstruction: 3 and 6.

The optimization revealed that 500 spectral candidates often proved insufficient for meaningful volume definition when computing the six-dimensional convex hull during the validation process. This shortfall was especially pronounced in input images where the minConstraint failed to significantly reduce the search area in the spectral domain. In extreme cases, none of the spectral candidates intersected with the set of physically plausible illuminants within the PCS. Ultimately, 1500 spectral candidates were chosen to strike a balance between detailed volume creation for estimation and computational efficiency.

LOWESS smoothing was pitted against a Median filter under varying configurations to evaluate their performance as outlined in Section 3.2. Both algorithms were tested with three smoothing levels. As expected, LOWESS yielded a more naturally shaped curve while retaining local variation, with a smoothing factor of $f=0.03$ identified as the most effective overall.

4. Experiments

To assess our image statistics-based framework, we tested its ability to estimate illuminant spectra and reconstruct relative reflectance from radiance cubes. We simulated normalized radiance images from known reflectances and daylight illuminants for this purpose. The framework predicted the illuminant spectrum for each cube without a reference target. Evaluations were conducted using full reference metrics including the Root Mean Square Error and the Complemented Goodness-of-Fit Coefficient (detailed in Section 4.2), comparing our method against spectral grayworld, spectral gray-edge, and max-spectral algorithms. Additionally, we estimated Spectral Power Distributions (SPD) to recover relative reflectance, comparing the outcomes to ground truth and applying the same metrics for accuracy assessment.

4.1 Data preparation

4.1.1 Reducing the bias within the illuminant database

The Correlated Color Temperature (CCT) is a common method to characterize daylight, correlating a spectrum to a Planckian radiator’s temperature to indicate perceived color [57,58]. Metamerism, a trait of the human visual system, allows multiple spectra to share the same CCT. The CCT of an SPD is found by comparing its CIE 1931 x,y chromaticity coordinates to the Planckian locus, built within the CIE 1931 chromaticity space [59]. Natural and artificial light sources often align with this locus, and their CCT is determined by the closest point on the locus. A more perceptually uniform scale, the inverse CCT, expresses a light source’s color in reciprocal mega-kelvin (MK$^{-1}$), calculated as $CCT^{-1} = 10^{6} / CCT$; it allows a better classification of irradiance spectra in the VIS range.
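
The conversion itself is a one-liner; for example, a 5500 K daylight spectrum maps to roughly 182 MK$^{-1}$:

    def inverse_cct(cct_kelvin):
        # CCT^-1 = 10^6 / CCT, in reciprocal mega-kelvin (MK^-1).
        return 1e6 / cct_kelvin

    print(inverse_cct(5500.0))  # ~181.8 MK^-1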

As stated by its authors [60], the Granada daylight spectral database is biased towards illuminants with an inverse CCT of around 175 to 180 MK$^{-1}$. To mitigate this, we performed stratified sampling by segmenting the dataset into regions of 5 MK$^{-1}$ each, spanning from 5 to 270 MK$^{-1}$. Up to 40 spectra were then randomly selected from the dataset for each region to simulate a more uniform distribution; at the extremes, only a few spectra were available for selection. The result was a dataset of 1326 daylight spectra, which replaces the initial dataset in all calculations described from here on.
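
A sketch of this stratified subsampling, assuming the inverse CCT of each spectrum has already been computed; the function name and argument layout are illustrative:

    import numpy as np

    def stratify_illuminants(spds, inv_ccts, lo=5, hi=270, width=5,
                             per_bin=40, rng=None):
        # spds: array (n, bands); inv_ccts: per-spectrum inverse CCT in
        # MK^-1. Draws up to per_bin spectra from each width-MK^-1 bin
        # between lo and hi to approximate a uniform distribution.
        rng = np.random.default_rng() if rng is None else rng
        keep = []
        for edge in np.arange(lo, hi, width):
            idx = np.flatnonzero((inv_ccts >= edge) &
                                 (inv_ccts < edge + width))
            take = min(per_bin, idx.size)
            keep.extend(rng.choice(idx, size=take, replace=False))
        return spds[np.array(keep, dtype=int)]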

4.1.2 Simulating radiance cubes for testing

To test the proposed framework, a dataset of relative radiance consisting of a total of 150 radiance cubes with a value range from 0 to 1 was calculated. The radiance test cubes were created by combining a set of 25 ground truth reflectance cubes and six representative illuminant spectra, assuming homogeneous illumination conditions. To obtain the reflectance data, 25 radiance cubes were first recorded using a Cubert Ultris X50 and a Cubert Ultris X20 hyperspectral snapshot camera mounted on a UAV. Both cameras are radiometrically calibrated and capture a spectral range of 350–1002 nm with a spectral sampling of 4 nm, resulting in 164 channels with a bit depth of 12 bit. While the Ultris X20 provides a native spatial resolution of 410×410 pixels, the spatial resolution of the Ultris X50 is 570×570 pixels. The image cubes of the Ultris X50 were cropped to 550×550 pixels to remove artifacts that were present in some of the images. Immediately before the capture of each radiance cube, a white and a dark calibration cube were recorded on scene with the respective camera. The position and angle of the white tile were approximately the same as those of the surface to be captured. Special care was taken to avoid measurements under rapid changes in illumination conditions, such as partial cloud cover. For the white calibration, a Spectralon tile with a reflectance factor of $>99\%$ over the spectral range of 400–1500 nm was used as reference. To account for dark current noise, the sensor was covered from any incident light when capturing the dark reference. The calibration measurements were then used to calculate reflectance data from the original radiance measurements. These reflectance values were considered as ground truth reflectances.

Colorimetric thumbnail images of the reflectance cubes, rendered in sRGB using the CIE D65 standard illuminant and the CIE 1964 color matching functions, both at 5 nm spectral resolution, are shown in Fig. 3 below. With a focus on drone-based applications, the images show a variety of vegetation and crops, as well as soil, dirt, tarmac roads, and other man-made objects like cars and houses. Several cubes captured under controlled conditions in the lab were also included.

Fig. 3. Example images from the test dataset, rendered as sRGB images using CIE standard illuminant D65 and CIE 1964 color matching functions.

From the altered illuminant dataset described in Section 4.1.1, six representative spectra were selected based on their inverse CCT (see Fig. 4). These spectra were used along with the reflectance cubes for the calculation of the radiance test images and were removed from the illuminant dataset. The rest of the illuminant dataset was used to calculate the training data for the PCS, as described in Section 4.1.3. The illuminant data was cropped and linearly interpolated to match the spectral bands recorded by the Cubert Ultris cameras.

Fig. 4. Chosen SPDs of the Granada daylight spectral dataset, selected based on their CCT$^{-1}$.

4.1.3 Simulating radiance data for training of the PCS

Besides preparing radiance cubes for testing the framework, radiance spectra were simulated to serve as a training dataset to fit the image-independent PCS. The training dataset contains a total of 316,800 radiance spectra. It was created by combining 240 reflectance spectra with the 1320 illuminant spectra described in Section 4.1.1. The reflectance spectra, kindly provided by the University of Granada, were obtained by measuring a GretagMacbeth ColorChecker DC using a Photo Research PR-745 spectroradiometer within a spectral range of 380–1080 nm. The spectra of the Granada daylight spectral database that were not used for creating the radiance test cubes served as SPDs for all combinations of reflectance and illuminant spectra, resulting in 316,800 training spectra in the range of 380–1080 nm.
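
Conceptually, the training set is the element-wise product of every reflectance spectrum with every illuminant SPD, after which the PCS is fitted; a sketch using scikit-learn's PCA, with names and array layout as assumptions:

    import numpy as np
    from sklearn.decomposition import PCA

    def build_pcs(reflectances, illuminants, n_components=3):
        # reflectances: (240, bands); illuminants: (1320, bands), both
        # sampled on the common wavelength grid. Simulated radiance is
        # the per-wavelength product of each reflectance with each SPD.
        radiance = reflectances[:, None, :] * illuminants[None, :, :]
        radiance = radiance.reshape(-1, reflectances.shape[-1])  # 316,800 rows
        pcs = PCA(n_components=n_components).fit(radiance)
        return pcs  # use pcs.transform(...) to map spectra into the PCS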

Afterwards, it was ensured that all data involved in the illuminant estimation shared a common spectral range and spectral resolution. This was necessary to fit the common PCS, transfer the input data into it, calculate the intersection, and convert the data back to the spectral domain. Therefore, the training and test datasets as well as the illuminant database were cropped to a common spectral range of 382–1002 nm. Then, the training and illuminant datasets were linearly interpolated to match the spectral bands of the radiance data.
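
The resampling step amounts to one-dimensional linear interpolation per spectrum; a minimal sketch with np.interp, where names are illustrative:

    import numpy as np

    def resample_spectra(spectra, src_wl, dst_wl):
        # spectra: array (n, len(src_wl)); src_wl, dst_wl: 1-D wavelength
        # vectors in nm, with dst_wl inside the range covered by src_wl
        # (e.g., the common 382-1002 nm camera band centers).
        return np.stack([np.interp(dst_wl, src_wl, s) for s in spectra])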

4.2 Evaluation metrics

Various full-reference metrics evaluate the estimated SPD and reflectance values against ground truth data. The Root Mean Square Error (RMSE) assesses the average magnitude of differences between estimated and original spectra, ranging from 0 (perfect match) to 1 (worst match) for normalized data, or up to infinity otherwise. The Goodness-of-Fit Coefficient (GFC) compares the shapes of two spectra; its adjusted version, the Complemented Goodness-of-Fit Coefficient (CGFC), allows a direct comparison with RMSE, where 0 indicates the best fit and 1 the worst spectral match. The Spectral Angle Mapper (SAM) measures the angle between two spectra treated as n-dimensional vectors, with smaller values indicating more similar spectra; it is insensitive to intensity differences. The Integrated Radiance Error (IRE) calculates the sum of absolute differences at each wavelength, normalized by the integrated reference spectrum, ranging from 0 (best match) to infinity (worst match), and is sensitive to scale changes.
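
For reference, the four metrics can be written compactly for a pair of sampled spectra; these follow the standard definitions (GFC as in [56]) and assume 1-D numpy arrays on a common wavelength grid:

    import numpy as np

    def rmse(est, ref):
        return np.sqrt(np.mean((est - ref) ** 2))

    def cgfc(est, ref):
        # GFC is a normalized inner product of the two spectra; the
        # complemented version 1 - GFC makes 0 the best match, like RMSE.
        gfc = np.abs(est @ ref) / (np.linalg.norm(est) * np.linalg.norm(ref))
        return 1.0 - gfc

    def sam(est, ref):
        # Spectral angle between the spectra viewed as n-D vectors;
        # insensitive to a global intensity scaling.
        cos = est @ ref / (np.linalg.norm(est) * np.linalg.norm(ref))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def ire(est, ref):
        # Summed absolute differences, normalized by the integrated
        # reference spectrum; sensitive to scale changes.
        return np.sum(np.abs(est - ref)) / np.sum(ref)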

4.3 Comparison to other statistics-based illumination estimation algorithms

In addition to comparing the estimated illuminant and the recovered reflectance cubes against their respective ground truth data, the estimated SPD is also compared against three other image statistics-based illumination estimation algorithms, namely spectral grayworld, spectral gray-edge and max-spectral as proposed by [32]. Since these methods are adopted from their color constancy algorithm counterparts, they will be referred to as spectral constancy algorithms when talking about them as a set of algorithms. For each of the radiance cubes used as test data for the proposed algorithm, the illumination is estimated using each of the spectral constancy algorithms. Their performance is evaluated by comparison against the ground truth illuminant by using the spectral full reference metrics CGFC, RMSE, IRE and SAM described in Section 4.2.

5. Results

5.1 Estimated illuminants

After estimating the SPD for each of the 150 radiance input cubes in the common PCS and reconstructing the illuminant spectra using three principal components, the error metrics explained in the previous section were computed. The results in terms of the mean CGFC, RMSE, IRE and SAM over all input cubes are shown in Table 1. In addition, the table shows the performance compared to state-of-the-art spectral constancy algorithms.

Table 1. Results of the estimation of the illuminant SPD using three components for reconstruction from PCS, statistical values for all 150 estimated illuminants of the test dataset.

Figure 5 consists of six plots showing an estimated relative SPD against the corresponding ground truth for each inverse CCT used for the experiments. Below, Table 2 shows the ability of the proposed model to predict daylight illuminants with a certain inverse CCT. In addition, a plot of the trendlines for estimating different inverse CCT is shown in Fig. 6.

Fig. 5. Individual reconstruction results, one for each chosen CCT$^{-1}$. The blue curve represents the ground truth illuminant used for creating the radiance image, the orange line is the reconstructed illuminant from three principal components using the proposed estimation model.

Fig. 6. Trendlines for the mean estimation accuracy of the proposed model per CCT$^{-1}$ of all evaluation metrics.

Table 2. Results of the illuminant estimation, subdivided by the CCT$^{-1}$ of the ground truth SPD.

The reconstruction from the PCS back to the spectral domain using three components yielded better results than using six, even though at least six components are recommended by [60].

The results in Table 1 show the proposed model’s strong capability to estimate a wide array of representative daylight SPDs. With a CGFC mean value of 0.02 and a $90^{th}$ percentile value of 0.05, there is a significant correlation between estimated and ground truth spectra. Despite not meeting the ‘good’ spectral reproduction threshold of a CGFC value $\leq 0.01$ on average, as defined by [61], the model still achieves a promising best CGFC value of 0.0026. Given the complexities of no-reference illuminant estimation, achieving perfect results across all scenes and conditions is improbable with a single algorithm.

In terms of RMSE, the algorithm performs well, evidenced by a mean RMSE of 0.1593 and 0.2795 for the $90^{th}$ percentile. This is visually corroborated by Fig. 5.

When compared to other image statistics-based algorithms, the proposed model shows superior performance in most metrics, as outlined in Table 1. Though max-spectral and spectral gray-edge show better peak performance in specific instances, our model improves precision across a broader range of inputs. Notably, it exhibits fewer outliers with large reproduction errors, continuing to deliver the best IRE mean and $90^{th}$ percentile results. One key to this performance is the model’s restriction of the estimated illuminant to a volume defined by the Granada daylight illuminant database, which competing algorithms lack.

An analysis of the SPD estimation results by their inverse CCT indicates higher precision for illuminants with an inverse CCT of 150 MK$^{-1}$ and above. The dataset of illuminants encompasses a broad spectrum of colors and illuminant conditions, and it was curated to reduce bias towards certain color temperatures. Nevertheless, due to the limited availability of reference spectra at the lower end of the CCT$^{-1}$ range, we recognize the possibility of a significant under-representation of illuminants with higher CCTs, which possess a larger ‘blue’ component not typically prevalent in agricultural and forestry applications.

A comparison between results using the first three and the first six principal components reveals a performance decline when using six, possibly due to overfitting or noise amplification; determining the exact reason for this observation requires further investigation and is beyond the scope of this work. Nevertheless, the current model shows strong performance with only three principal components, and further refinement within the PCS might improve future estimation accuracy.

5.2 Reconstructed reflectance cubes

This section compares ground truth reflectance against reflectance reconstructed using the illuminant SPD estimated by the proposed model. First, the reflectance cubes are reconstructed by dividing each radiance cube by its corresponding estimated illuminant. Then, for every individual point spectrum across all pixels within the 150 reflectance cubes (over 45.3 million point spectra in total), the reconstructed reflectance is compared against the ground truth using the metrics discussed in Section 4.2. From these sets of values, the minimum, mean, maximum, and $90^{th}$ percentile are calculated for each metric to evaluate the overall performance of the reflectance reconstruction. The results are shown in Table 3. Figure 7 then shows a comparison between the errors of the SPD estimation and the reflectance recovery.
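
The reconstruction step itself is a per-band division broadcast over the spatial dimensions; a minimal sketch, with the epsilon guard as an illustrative safeguard:

    import numpy as np

    def recover_reflectance(radiance_cube, illuminant_est, eps=1e-9):
        # radiance_cube: (rows, cols, bands); illuminant_est: one value
        # per band, broadcast over the spatial dimensions.
        return radiance_cube / np.maximum(illuminant_est, eps)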

Fig. 7. Comparison of trendlines for the mean results in terms of illuminant SPD estimation and reflectance recovery of the proposed model per CCT$^{-1}$ of all evaluation metrics; first three principal components used.

Table 3. Results of the relative reflectance reconstruction. The results are calculated using all 45.3 million spectra of the 150 test cubes.

The evaluation of the reflectance recovery in this section shows excellent reconstruction quality in the best cases, with a minimum CGFC value of 0.0002, a minimum RMSE of 0.0003, and a numerically perfect match in terms of IRE. Also, as expected, the overall reflectance estimation results improve slightly in comparison to the illuminant estimation. This improvement is visualized in Fig. 7, where the mean results of all metrics are plotted for the SPD estimation and the reflectance recovery for each inverse CCT. The overall trend of the curves is the same, but the error is slightly smaller in terms of SAM and significantly smaller in terms of RMSE over all inverse CCTs. Since illuminant estimation from image data without ground truth measurements always depends on the information present in the image, some regions of the SPD spectral range might not be properly represented by the input data: the objects within the scene may barely reflect any radiance in certain ranges of the measured spectrum. Since the illuminant information is recovered directly from a particular scene measurement, the uncertainties of the estimation will most likely be higher for those same regions. Conversely, this means that even if the illuminant cannot be predicted with high accuracy over the whole spectral range, using that estimate might not introduce large errors when predicting reflectance values for the given radiance cube.

6. Conclusions and future work

This work focused on developing a framework for the precise estimation of daylight illuminant spectra from hyperspectral radiance data, eliminating the need for simultaneous ground truth measurements such as reflectance targets. The framework is particularly suited for drone-based surveillance in agricultural and forestry contexts. Distinctively, the proposed method constrains the potential illuminant spectra by establishing both image-dependent plausible spectra and physically possible spectra. The former are produced by narrowing the search space in the spectral domain based on generic assumptions, and generating spectral candidates through a random walk approach. The latter are exemplified by a dataset of measured daylight illuminants. An intersection point between these two sets is calculated within a pre-trained, input-independent Principal Component Space (PCS). This intersection point, when converted back to the spectral domain, yields the estimated illuminant.

The evaluation indicates that the proposed model excels at estimating a representative set of illuminant spectra across an extensive set of 150 input images, significantly surpassing competing statistics-based methods. A crucial assumption underpinning the method, namely that daylight illuminants can be estimated as an intersection between these constraints within an image-independent PCS, has been shown to deliver promising results. Specifically, illuminants corresponding to an inverse CCT of 150–200 MK$^{-1}$ are accurately predicted. Furthermore, the framework effectively recovers reflectance values from radiance input data. The model has been successfully validated for illuminants within a spectral range of 382–1002 nm, making it apt for various vegetation-based analyses. However, its applicability is not confined to these scenarios.

The framework shows promise but also has areas for improvement. Its spectral range of 382–1002 nm is versatile, especially useful for applications that examine biochemical and biophysical plant parameters. Importantly, the framework is not restricted to this specific spectral range; it can be expanded to include the UV and SWIR ranges, given that sufficient training data are available to create the independent PCS. It is anticipated that such an extension would not necessitate structural changes to the existing model. Another avenue for improvement could come from the data sources. While the Granada daylight spectral database is a robust resource, particularly for daylight conditions in southern Europe, expanding the database to include illuminant measurements from different global locations could enhance the model’s estimation capabilities. In terms of computational methods, the framework could benefit from the incorporation of alternative calculation approaches, such as a nonlinear representation of the measured daylight illuminants within the PCS or gamut-based techniques. Future research should explore extending the framework to other types of illuminants, including LED and incandescent light sources for controlled agricultural environments like greenhouse farms. Finally, the current model operates under the assumption of a single light source in the scene. Future iterations could consider more complex lighting conditions, potentially by subdividing the scene into smaller grids to perform local illuminant estimation and obtain an illuminant map, as suggested by [62]. Another approach could be the incorporation of 3D hyperspectral data.

Funding

Universidad de Granada; Norges Teknisk-Naturvitenskapelige Universitet; Cubert GmbH.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. Rast and T. H. Painter, “Earth observation imaging spectroscopy for terrestrial systems: An overview of its history, techniques, and applications of its missions,” Surveys in Geophysics 40(3), 303–331 (2019). [CrossRef]  

2. B. Lu, P. Dao, J. Liu, et al., “Recent Advances of Hyperspectral Imaging Technology and Applications in Agriculture,” Remote Sens. 12(16), 2659 (2020). [CrossRef]  

3. J. Ma, D.-W. Sun, H. Pu, et al., “Advanced Techniques for Hyperspectral Imaging in the Food Industry: Principles and Recent Applications,” Annu. Rev. Food Sci. Technol. 10(1), 197–220 (2019). [CrossRef]  

4. A. Jung, “HYPERSPECTRAL IMAGING,” in Digital Techniques for Documenting and Preserving Cultural Heritage (Arc Humanities Press, 2017), pp. 217–220.

5. A. Wendel and J. Underwood, “Illumination compensation in ground based hyperspectral imaging,” ISPRS J. Photogramm. Remote. Sens. 129, 162–178 (2017). [CrossRef]  

6. J. Jablonski, C. Durell, T. Slonecker, et al., “Best practices in passive remote sensing VNIR hyperspectral system hardware calibrations,” in Hyperspectral Imaging Sensors: Innovative Applications and Sensor Standards 2016, D. P. Bannon, ed. (SPIE, 2016).

7. H. Yao and D. Lewis, “Spectral Preprocessing and Calibration Techniques,” in Hyperspectral Imaging for Food Quality Analysis and Control (Elsevier, 2010), pp. 45–78.

8. A. Koz, “Ground-Based Hyperspectral Image Surveillance Systems for Explosive Detection: Part II—Radiance to Reflectance Conversions,” IEEE J. Sel. Top. Appl. Earth Observations Remote Sensing 12(12), 4754–4765 (2019). [CrossRef]  

9. S. J. Koppal, Lambertian Reflectance (Springer, 2014), pp. 441–443.

10. G. Yang, C. Li, Y. Wang, et al., “The DOM Generation and Precise Radiometric Calibration of a UAV-Mounted Miniature Snapshot Hyperspectral Imager,” Remote Sens. 9(7), 642 (2017). [CrossRef]  

11. H. Aasen, A. Burkart, A. Bolten, et al., “Generating 3d hyperspectral information with lightweight UAV snapshot cameras for vegetation monitoring: From camera calibration to quality assurance,” ISPRS J. Photogramm. Remote. Sens. 108, 245–259 (2015). [CrossRef]  

12. J. Suomalainen, N. Anders, S. Iqbal, et al., “A Lightweight Hyperspectral Mapping System and Photogrammetric Processing Chain for Unmanned Aerial Vehicles,” Remote Sens. 6(11), 11013–11030 (2014). [CrossRef]  

13. F. Wang and A. Theuwissen, “Linearity analysis of a CMOS image sensor,” Electronic Imaging 29(11), 84–90 (2017). [CrossRef]  

14. M. Czech, G. Trumpy, and A. R. Syed, “Do-It-Yourself LUT-based Linearization of Image Sensors,” in Archiving Conference (publication pending) (2023).

15. A. Abdelbaki, M. Schlerf, R. Retzlaff, et al., “Comparison of Crop Trait Retrieval Strategies Using UAV-Based VNIR Hyperspectral Imaging,” Remote Sens. 13(9), 1748 (2021). [CrossRef]  

16. K. Uto, H. Seki, G. Saito, et al., “Characterization of Rice Paddies by a UAV-Mounted Miniature Hyperspectral Sensor System,” IEEE J. Sel. Top. Appl. Earth Observations Remote Sensing 6(2), 851–860 (2013). [CrossRef]  

17. T. Hakala, L. Markelin, E. Honkavaara, et al., “Direct Reflectance Measurements from Drones: Sensor Absolute Radiometric Calibration and System Tests for Forest Reflectance Characterization,” Sensors 18(5), 1417 (2018). [CrossRef]  

18. J. P. Arroyo-Mora, M. Kalacska, T. Løke, et al., “Assessing the impact of illumination on UAV pushbroom hyperspectral imagery collected under various cloud cover conditions,” Remote. Sens. Environ. 258, 112396 (2021). [CrossRef]  

19. H. Smithson, “Sensory, computational and cognitive components of human colour constancy,” Phil. Trans. R. Soc. B 360(1458), 1329–1346 (2005). [CrossRef]  

20. D. B. Judd, D. L. MacAdam, G. Wyszecki, et al., “Spectral Distribution of Typical Daylight as a Function of Correlated Color Temperature,” J. Opt. Soc. Am. 54(8), 1031–1040 (1964). [CrossRef]  

21. J. Hernández-Andrés, R. L. Lee, and J. Romero, “Calculating correlated color temperatures across the entire gamut of daylight and skylight chromaticities,” Appl. Opt. 38(27), 5703–5709 (1999). [CrossRef]  

22. J. Romero, J. Hernández-Andrés, J. L. Nieves, et al., “Color coordinates of objects with daylight changes,” Color Res. Appl. 28(1), 25–35 (2002). [CrossRef]  

23. N. Krüger, P. Janssen, S. Kalkan, et al., “Deep Hierarchies in the Primate Visual Cortex: What Can We Learn for Computer Vision?” IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 1847–1871 (2013). [CrossRef]  

24. M. Ebner, Color Constancy (Wiley, 2007).

25. J. Seymour, “Color inconstancy in CIELAB: A red herring?” Color Res. Appl. 47(4), 900–919 (2022). [CrossRef]  

26. J. Maule, A. E. Skelton, and A. Franklin, “The Development of Color Perception and Cognition,” Annu. Rev. Psychol. 74(1), 87–111 (2023). [CrossRef]  

27. D. H. Foster, “Color constancy,” Vision Res. 51(7), 674–700 (2011). [CrossRef]  

28. G. Buchsbaum, “A spatial processor model for object colour perception,” J. Franklin Inst. 310(1), 1–26 (1980). [CrossRef]  

29. E. H. Land and J. J. McCann, “Lightness and Retinex Theory,” J. Opt. Soc. Am. 61(1), 1 (1971). [CrossRef]  

30. G. D. Finlayson and E. Trezzi, “Shades of gray and colour constancy,” in Color and Imaging Conference, vol. 2004 (Society for Imaging Science and Technology, 2004), pp. 37–41.

31. J. van de Weijer and T. Gevers, “Color constancy based on the Grey-edge hypothesis,” in IEEE International Conference on Image Processing (IEEE, 2005).

32. H. A. Khan, J.-B. Thomas, J. Y. Hardeberg, et al., “Illuminant estimation in multispectral imaging,” J. Opt. Soc. Am. A 34(7), 1085 (2017). [CrossRef]  

33. H. A. Khan, J.-B. Thomas, J. Y. Hardeberg, et al., “Spectral Adaptation Transform for Multispectral Constancy,” J. Imaging Sci. Technol. 62(2), 020504-1–020504-12 (2018). [CrossRef]  

34. C. Fredembach and G. Finlayson, “Bright Chromagenic Algorithm for Illuminant Estimation,” J. Imaging Sci. Technol. 52(4), 40906-1–40906-11 (2008). [CrossRef]  

35. S. A. Shafer, “Using color to separate reflection components,” Color Res. Appl. 10(4), 210–218 (1985). [CrossRef]  

36. S. Tominaga, “Using reflectance models for surface estimation,” in Color and Imaging Conference, vol. 1995 (Society for Imaging Science and Technology, 1995), pp. 29–33.

37. D. An, J. Suo, H. Wang, et al., “Illumination estimation from specular highlight in a multi-spectral image,” Opt. Express 23(13), 17008 (2015). [CrossRef]  

38. S. Tominaga, K. Hirai, and T. Horiuchi, “Spectral Estimation of Multiple Light Sources based on Highlight Detection,” J. Imaging Sci. Technol. 64(5), 050408-1–050408-9 (2020). [CrossRef]  

39. E. B. Goldstein, Sensation and Perception (Cengage Learning, 2010).

40. A. Banerjee, P. Burlina, and J. Broadwater, “Hyperspectral video for illumination-invariant tracking,” in First Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (IEEE, 2009).

41. H. A. Khan, J.-B. Thomas, and J. Y. Hardeberg, “Analytical Survey of Highlight Detection in Color and Spectral Images,” in Lecture Notes in Computer Science (Springer International Publishing, 2017), pp. 197–208.

42. S. Sethu, J. Devaraj, and D. Wang, “A Comprehensive Review of Deep Learning based Illumination Estimation,” Preprints.org, preprints202302.0478.v1 (2023). [CrossRef]  

43. A. Khan, A. D. Vibhute, S. Mali, et al., “A systematic review on hyperspectral imaging technology with a machine and deep learning methodology for agricultural applications,” Ecol. Informatics 69, 101678 (2022). [CrossRef]  

44. A. Ozdemir and K. Polat, “Deep Learning Applications for Hyperspectral Imaging: A Systematic Review,” J. Inst. Electron. Comput. 2(1), 39–56 (2020). [CrossRef]

45. F. Grillini, J.-B. Thomas, and S. George, “Comparison of Imaging Models for Spectral Unmixing in Oil Painting,” Sensors 21(7), 2471 (2021). [CrossRef]  

46. A. Gijsenij, T. Gevers, and J. van de Weijer, “Computational Color Constancy: Survey and Experiments,” IEEE Trans. on Image Process. 20(9), 2475–2489 (2011). [CrossRef]  

47. A. Robles-Kelly and R. Wei, “A Convolutional Neural Network for Pixelwise Illuminant Recovery in Colour and Spectral Images,” in 24th International Conference on Pattern Recognition (IEEE, 2018).

48. J. Snoek, H. Larochelle, and R. P. Adams, “Practical Bayesian Optimization of Machine Learning Algorithms,” in 25th International Conference on Neural Information Processing Systems - Volume 2 (Curran Associates Inc., 2012), pp. 2951–2959.

49. R. Leamer and J. Noriega, “Reflectance brightness measured over agricultural areas,” Agric. Meteorol. 23, 1–8 (1981). [CrossRef]  

50. J. Hernández-Andrés, J. Romero, and R. L. Lee, “Colorimetric and spectroradiometric characteristics of narrow-field-of-view clear skylight in Granada, Spain,” J. Opt. Soc. Am. A 18(2), 412 (2001). [CrossRef]  

51. K. Pearson, “The problem of the random walk,” Nature 72(1865), 294 (1905). [CrossRef]  

52. W. S. Cleveland, “Robust Locally Weighted Regression and Smoothing Scatterplots,” J. Am. Stat. Assoc. 74(368), 829–836 (1979). [CrossRef]  

53. Z. Kosztyán and J. Schanda, “Smoothing spectral power distribution of daylights,” Color Res. Appl. 38(5), 316–321 (2012). [CrossRef]  

54. M. A. Fischler and R. C. Bolles, “Random sample consensus,” Commun. ACM 24(6), 381–395 (1981). [CrossRef]  

55. C. B. Barber, D. P. Dobkin, and H. Huhdanpaa, “The quickhull algorithm for convex hulls,” ACM Trans. Math. Softw. 22(4), 469–483 (1996). [CrossRef]  

56. J. Viggiano, “Metrics for evaluating spectral matches: a quantitative comparison,” in Conference on Colour in Graphics, Imaging, and Vision, vol. 2004 (Society for Imaging Science and Technology, 2004), pp. 286–291.

57. C. S. McCamy, “Correlated color temperature as an explicit function of chromaticity coordinates,” Color Res. Appl. 17(2), 142–144 (1992). [CrossRef]  

58. R. W. G. Hunt, “Colour terminology,” Color Res. Appl. 3(2), 79–87 (1978). [CrossRef]  

59. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, vol. 40 (John Wiley & Sons, 2000).

60. J. Hernández-Andrés, J. Romero, J. L. Nieves, et al., “Color and spectral analysis of daylight in southern europe,” J. Opt. Soc. Am. A 18(6), 1325 (2001). [CrossRef]  

61. J. Romero, A. García-Beltrán, and J. Hernández-Andrés, “Linear bases for representation of natural and artificial illuminants,” J. Opt. Soc. Am. A 14(5), 1007 (1997). [CrossRef]  

62. A. Gijsenij, R. Lu, and T. Gevers, “Color Constancy for Multiple Light Sources,” IEEE Trans. on Image Process. 21(2), 697–707 (2012). [CrossRef]  


