
Color reproduction pipeline for an RGBW color filter array sensor

Open Access

Abstract

Many types of RGBW color filter arrays (CFAs) have been proposed for various purposes. Most studies utilize the white pixel intensity to improve the signal-to-noise ratio of the image and to demosaic it, but we note that the white pixel intensity can also be utilized to improve color reproduction. In this paper, we propose a color reproduction pipeline for RGBW CFA sensors based on fast, accurate, and hardware-friendly gray pixel detection using the white pixel intensity. The proposed color reproduction pipeline was tested on a dataset captured with an offset pixel aperture (OPA) sensor, which has an RGBW CFA. Experimental results show that the proposed pipeline estimates the illumination more accurately and preserves achromatic colors better than conventional methods that do not use the white pixel intensity.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Many types of color filter arrays (CFAs) have recently been proposed [1–5] to improve image quality. As one of these efforts, the white pixel, a transparent filter element, has been included in CFAs [6–8], as shown in Fig. 1. The wide spectral response of the white pixel is utilized not only to enhance brightness and reflectance in displays [9,10], but also to increase the signal-to-noise ratio of the color image in cameras [11]. In addition, white pixels with integrated micro-apertures are used to obtain the color image and depth-related disparity simultaneously in a single-lens imaging system [12–14]. We note that white pixels can also be used to improve the color reproduction of the image.


Fig. 1. Various color filter array (CFA) patterns. (a) Bayer CFA [5] which is widely used in digital cameras. (b) Sony RGBW CFA [6]. (c) Bayer-like RGBW CFA [8]. (d) Offset pixel aperture (OPA) RGBW CFA [14] whose white pixel is covered with the micro-aperture.


The conventional color reproduction pipeline consists of two steps: white balance and color correction. White balance is motivated by the color constancy ability of the human visual system, which perceives object colors as approximately constant regardless of the color of the light source [15]. The goal of computational color constancy is to eliminate the color cast caused by the incident light. In order to render the image as if it were captured under a white light source, most color constancy methods first estimate the scene illumination from the color-biased image captured by the image sensor. They then correct the image by applying a transform matrix calculated from the color of the estimated illuminant. Approaches to illumination estimation are mainly divided into statistical methods [16–18] and learning-based methods [19–21]. Although learning-based approaches show remarkable performance, their training-data dependency [22], expensive hardware requirements, and slow running speed make them difficult to use in practical applications. Based on the hypothesis that most natural images contain detectable gray pixels, gray pixel detection approaches [23,24] estimate the illumination quickly and nearly as accurately as learning-based approaches. However, since these methods use a local window to detect gray pixels, it is difficult to extract reliable gray pixels in uniform regions and at the boundaries of different color surfaces. In addition, the gray index sorting algorithm, used in previous works to select the most reliable top n% of gray pixels in every frame, increases hardware cost.

The objective of color correction is to transform the device-dependent color space into a device-independent color space. Since the spectral sensitivity of the image sensor usually differs from the desired one, color correction is an essential step in color reproduction. Among the many proposed algorithms [25–29], least square regression approaches [30–32] are widely used due to their low computational complexity. White color preservation is one of the most important requirements of color reproduction. However, the color correction matrix obtained by least square regression causes the white color to shift because it minimizes the colorimetric error over all calibration colors. A constrained least square regression [33,34] has been proposed with a constraint that maps a selected white color exactly. Although the constrained least square regression performs well for a hypothetical set that contains all surface reflectances, the colorimetric error increases when the input image does not contain sufficient white reflectance.

In this paper, we propose a color reproduction pipeline for RGBW sensors, where the limitations of white balance and color correction are resolved by using the white pixel intensity. The contributions of the proposed pipeline are as follows.

  • We propose a fast, accurate, and hardware-friendly gray pixel detection algorithm for RGBW sensors. Since the proposed algorithm does not use a local window, reliable gray pixels can be detected even in uniform regions. Instead of a computationally heavy sorting algorithm, the properties of the Skellam distribution are used to efficiently distinguish reliable gray pixels. To the best of our knowledge, the proposed algorithm is the first attempt to use the white pixel intensity for illuminant estimation.
  • We propose an adaptive white preserved color correction that tunes the color correction matrix according to the gray index of each pixel calculated by the gray pixel detection algorithm. The tuning is performed by a weighted summation of color correction matrices with the gray index as the weight factor. This tuning preserves achromatic colors while keeping chromatic colors from deteriorating.
The rest of the paper is structured as follows. The proposed gray pixel detection algorithm is explained in Section 2, the proposed color reproduction pipeline is described in Section 3, experimental results are discussed in Section 4, and the limitation and future work are discussed in Section 5. The paper is concluded in Section 6.

2. Gray pixel detection using the white pixel intensity

The achromatic region provides an important cue to estimate illumination as it reflects the color of the incident light of the image. If the achromatic region is extracted from the color-biased image, the scene illumination can be accurately estimated.

Yang et al. verified the hypothesis that most natural images in the real world contain detectable gray pixels, which can be used for illuminant estimation [23,24]. Based on this hypothesis, previous works defined an illuminant-invariant measure (IIM) using local contrast or local gradients. These methods may fail to detect reliable gray pixels that are isolated or located within a uniform region because the IIM is calculated within a local window. To solve this problem, we define an IIM using the white pixel intensity of the RGBW sensor. The mathematical derivation is as follows.

The pixel intensity ${I_i}(x,y) \in \{ {I_R},{I_G},{I_B},{I_W}\}$ at $(x,y)$ can be represented by the dichromatic reflection model [35,36] as follows:

$${I_i}(x,y) = {m_b}(x,y)\int E(\lambda ){S_i}(\lambda ){R_b}(x,y,\lambda )d\lambda + {m_s}(x,y)\int E(\lambda ){S_i}(\lambda ){R_s}(x,y,\lambda )d\lambda ,$$
where $E(\lambda )$ is the illuminant spectral power distribution, ${S_i}(\lambda )$ is the spectral sensitivity of the sensor, ${R_b}(x,y,\lambda )$ is the body reflectance, ${R_s}(x,y,\lambda )$ is the specular reflectance, ${m_b}(x,y)$ is the body geometric scale factor, ${m_s}(x,y)$ is the specular geometric scale factor, and $\lambda$ is the wavelength.
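For intuition, the integral in Eq. (1) can be evaluated numerically with discretely sampled spectra. The sketch below is a minimal Python illustration with hypothetical spectra; none of the arrays come from the paper, they only show the structure of the model.

```python
# Minimal numerical sketch of the dichromatic model in Eq. (1),
# using toy spectra sampled every 10 nm over the visible range.
import numpy as np

wavelengths = np.arange(400, 701, 10)              # nm
E = np.ones_like(wavelengths, dtype=float)         # flat (white) illuminant SPD
S = np.exp(-((wavelengths - 550) / 50.0) ** 2)     # toy G-like sensor sensitivity
R_body = np.full_like(E, 0.4)                      # toy body reflectance
R_spec = np.full_like(E, 0.9)                      # toy specular reflectance
m_b, m_s = 0.8, 0.05                               # geometric scale factors

# Riemann-sum approximation of the two integrals in Eq. (1)
dlam = 10.0
I = m_b * np.sum(E * S * R_body) * dlam + m_s * np.sum(E * S * R_spec) * dlam
print(I)
```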

The goal of white balance is to estimate $E(\lambda )$ from ${I_i}(x,y)$ and render the image as if it were captured under a white light source. However, this is an ill-posed problem: the only quantity available from the image sensor is the pixel intensity, which varies not only with the spectral distribution of the illuminant but also with the reflectance of the object. Thus, additional assumptions are needed to solve the problem.

Although the pixel intensity depends on specular reflection, numerous illuminant estimation algorithms ignore it for simplicity and adopt the Lambertian reflectance model [37–39] as follows:

$${I_i}(x,y) = m(x,y)\int E(\lambda ){S_i}(\lambda )R(x,y,\lambda )d\lambda ,$$
where $m(x,y)$ is the Lambertian shading, and $R(x,y,\lambda )$ is the surface reflectance. In practice, it is impossible to recover the continuous spectrum of the illuminant from pixel intensities that are integrated over wavelength [18,40,41]. The von Kries coefficient law [42] is adopted in numerous studies [43–46] to transform Eq. (2) into the simplified diagonal model
$${I_i}(x,y) = {E_i}(x,y){R_i}(x,y),$$
where ${E_i}(x,y)$ is the diagonal matrix of illumination and ${R_i}(x,y)$ is the reflectance. In logarithmic space, the pixel intensity is expressed as the summation of the logarithms of the ${E_i}(x,y)$ and ${R_i}(x,y)$ as
$$\log {I_i}(x,y) = \log ({E_i}(x,y) \cdot {R_i}(x,y)) = \log {E_i}(x,y) + \log {R_i}(x,y).$$
Suppose the illuminant is uniform within R, G, B, and W pixels at the same position $(x,y)$. Once we estimate the white pixel intensity from R, G, and B pixels as ${I^{\prime}_W}(x,y)$, the difference between $\log {I_W}(x,y)$ and $\log {I^{\prime}_W}(x,y)$ is independent of the illuminant ${E_i}(x,y)$ in logarithmic space, which can be expressed as
$$\Delta \log {I_W}(x,y) = |{\log {I_W}(x,y) - \log {{I^{\prime}}_W}(x,y)} |= |{\log {R_W}(x,y) - \log {{R^{\prime}}_W}(x,y)} |.$$
Since $\Delta \log {I_W}(x,y)$ is independent of the illuminant, it can be used as an IIM as shown in Fig. 2(e). Based on the spectral correlation among the R, G, B, and W pixel intensities, the estimated white pixel intensity ${I^{\prime}_W}(x,y)$ is calculated under the assumption [47] that there is a linear relationship between the R, G, B, and W pixel intensities. An offset term is also included to compensate for the spectral mismatch of the color filter array [48] as follows:
$${I^{\prime}_W}(x,y) = {\hat{k}_R}{I_R}(x,y) + {\hat{k}_G}{I_G}(x,y) + {\hat{k}_B}{I_B}(x,y) + {\hat{k}_O},$$
where ${\hat{k}_R}$, ${\hat{k}_G}$, ${\hat{k}_B}$, and ${\hat{k}_O}$ are white estimation coefficients obtained by least square regression.
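As a concrete illustration, the coefficients in Eq. (6) can be fitted by ordinary least squares. The sketch below uses synthetic calibration data and variable names of our own choosing; it is not the authors' calibration procedure, only a plausible way to obtain $\hat{k}_R$, $\hat{k}_G$, $\hat{k}_B$, and $\hat{k}_O$.

```python
# Sketch: fit the white-estimation coefficients of Eq. (6) by least squares.
import numpy as np

rng = np.random.default_rng(0)
I_R, I_G, I_B = rng.uniform(0, 255, (3, 10000))    # synthetic co-sited intensities
I_W = 0.9 * I_R + 1.1 * I_G + 0.8 * I_B + 3.0 + rng.normal(0, 1, 10000)

# Design matrix [I_R, I_G, I_B, 1] so the offset k_O is fitted jointly
A = np.stack([I_R, I_G, I_B, np.ones_like(I_R)], axis=1)
k, *_ = np.linalg.lstsq(A, I_W, rcond=None)
k_R, k_G, k_B, k_O = k

def estimate_white(ir, ig, ib):
    """Estimated white intensity I'_W per Eq. (6)."""
    return k_R * ir + k_G * ig + k_B * ib + k_O
```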


Fig. 2. The heat maps of the gray index extracted from various algorithms. (a) Input image of the Macbeth ColorChecker. (b) Ground truth of the gray index. (c) Estimated gray index of color constancy using gray pixels [23]. (d) Estimated gray index of improved color constancy using gray pixels [24]. (e) Estimated gray index using $\Delta \log {I_W}(x,y).$ (f) Estimated gray index of the proposed method.


In order to assess the reliability of the IIM at each pixel, its standard deviation is considered. The pixel intensity is determined by the number of arriving photons, which follows the laws of quantum physics [49]. As the probability distribution of photon arrivals at a pixel follows the Poisson distribution, $\Delta \log {I_W}(x,y)$ is distributed as the difference of two random variables that are logarithms of Poisson variates. Since the standard deviation of $\Delta \log {I_W}(x,y)$ is too complicated to calculate, the difference between ${I_W}(x,y)$ and ${I^{\prime}_W}(x,y)$ is designated as the IIM instead, under the assumption that $\Delta {I_W}(x,y)$ also reasonably eliminates the effect of the illuminant. The IIM for estimating gray pixels is calculated as follows:

$$\Delta {I_W}(x,y) = {I_W}(x,y) - {I^{\prime}_W}(x,y).$$
Since the IIM is calculated at each pixel, it can be used in a uniform region, boundary region, single illuminant environment, and multi-illuminant environment. Then, the probability distribution of IIM follows the Skellam distribution [50,51] because the difference between two Poisson random variables is defined as the Skellam distribution, which can be expressed as
$$f(k;{\mu _1},{\mu _2}) = {e^{ - ({\mu _1} + {\mu _2})}}{\left( {\frac{{{\mu_1}}}{{{\mu_2}}}} \right)^{\frac{k}{2}}}{I_k}\left( {2\sqrt {{\mu_1}{\mu_2}} } \right),$$
where ${\mu _1}$ and ${\mu _2}$ are the means of the two Poisson distributions, i.e., the white pixel intensity and the estimated white pixel intensity, and ${I_k}(z)$ is the modified Bessel function of the first kind. The mean ${\mu _s}$ and the standard deviation ${\sigma _s}$ of the Skellam distribution are given by
$${\mu _s} = {\mu _1} - {\mu _2},$$
$${\sigma _s} = \sqrt {{\mu _1} + {\mu _2}} .$$
With the property of the Skellam distribution, a gray index $GI(x,y)$ is defined to measure the grayness of each pixel as
$$GI(x,y) = \begin{cases} 1 - \dfrac{|{\Delta {I_W}(x,y)}|}{c{\sigma_s}} & \textrm{if } |{\Delta {I_W}(x,y)}| < c{\sigma_s}\\ 0 & \textrm{otherwise}, \end{cases}$$
where c is a threshold coefficient. Through ${\sigma _s}$, the threshold adapts to the pixel intensity level. The larger the $GI$, the higher the probability that the pixel is gray. To reduce noise and obtain a reliable $GI$, an average filter is applied to the $GI$ as
$$G{I^\ast }(x,y) = A{F_5}\{ GI(x,y)\} ,$$
where $A{F_5}\{{\cdot} \} $ is the $5 \times 5$ averaging filter. As shown in Fig. 2(f), the proposed $G{I^\ast }$ can robustly distinguish achromatic color patches and border lines of the Macbeth ColorChecker as gray pixels. In addition, the proposed $G{I^\ast }$ is easily implemented in hardware with a 48 k NAND2 gate count and 9.5 kB SRAM as it does not require the sorting algorithm used in previous works [23,24] to select reliable top n% gray pixels for every frame.
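The per-pixel computation of Eqs. (7)–(12) maps directly to array operations. A minimal NumPy/SciPy sketch, assuming co-sited full-resolution planes of measured and estimated white intensities (array names are ours), might look as follows; the default c follows the value used in the experiments of Section 4.1.

```python
# Sketch of the gray-index computation, Eqs. (7)-(12).
import numpy as np
from scipy.ndimage import uniform_filter

def gray_index(I_W, I_W_est, c=0.001):
    delta = I_W - I_W_est                        # IIM of Eq. (7)
    sigma_s = np.sqrt(I_W + I_W_est)             # Skellam std, Eq. (10)
    thresh = c * sigma_s + 1e-12                 # epsilon avoids divide-by-zero
    GI = np.where(np.abs(delta) < thresh,
                  1.0 - np.abs(delta) / thresh,  # Eq. (11)
                  0.0)
    return uniform_filter(GI, size=5)            # 5x5 averaging filter, Eq. (12)
```

Because no local window enters the IIM itself, the only spatial operation is the final $5 \times 5$ averaging, which is what keeps the hardware cost low.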

3. Color reproduction with gray pixel

3.1 White balance using the gray pixel detection

The color of the illuminant can be estimated from $G{I^\ast }$, since pixels with high grayness directly reflect the illuminant color. According to the definition of $G{I^\ast }$, its magnitude is close to one for achromatic colors. The estimated illuminant color ${e_j}$ is calculated as

$${e_j} = \frac{1}{N}\sum\nolimits_{x,y} {{I_j}(x,y)G{I^\ast }(x,y),\;\;\;\;j \in \{ R,G,B\} } ,$$
where N is the number of non-zero $G{I^\ast }$ values. White balance is then applied to each pixel value of the input image by
$$I_j^{WB}(x,y) = {I_j}(x,y)\frac{{{e_R} + {e_G} + {e_B}}}{{3{e_j}}},\;\;\;\;j \in \{ R,G,B\} ,$$
where $I_j^{WB}$ is the white-balanced pixel value.
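A compact sketch of Eqs. (13) and (14), assuming a demosaiced (H, W, 3) image and the filtered gray index from Section 2 (array and function names are ours, not the paper's):

```python
# Sketch: gray-index-weighted illuminant estimate (Eq. (13)) followed by
# the diagonal white-balance correction (Eq. (14)).
import numpy as np

def white_balance(I, GI):
    N = np.count_nonzero(GI)
    # Eq. (13): average of pixel values weighted by the gray index
    e = np.array([(I[..., j] * GI).sum() / N for j in range(3)])
    # Eq. (14): scale each channel so the estimated illuminant becomes neutral
    gains = e.sum() / (3.0 * e)
    return I * gains
```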

3.2 Adaptive color correction considering grayness

For a $3 \times 1$ input vector ${\textbf c}$ formed by the R, G, and B sensor responses, the linear color correction transform is typically performed by

$${\textbf t} = {\textbf Mc},$$
where ${\textbf t}$ is a $3 \times 1$ vector of the known standard RGB values that we want to restore, and ${\textbf M}$ is a $3 \times 3$ color correction matrix. Unlike the typical case, the elements of ${\textbf c}$ are white-balanced pixel values rather than raw sensor responses, because white balance precedes color correction in our reproduction pipeline. For a set of N training patches, we denote the $3 \times N$ target color matrix and the $3 \times N$ white-balanced color matrix as ${\textbf T}$ and ${\textbf C}$, respectively. The color correction matrix is generally calculated by least square regression as
$$\widehat {\textbf M} = \arg \mathop {\min }\limits_{\textbf M} ||{{\textbf T} - {\textbf MC}} ||_F^2,$$
where ${||\cdot ||_F}$ is the Frobenius norm. Since the least square regression finds the optimal $\widehat {\textbf M}$ that minimizes the colorimetric error over all target colors, the white color error, which is important in color reproduction, may increase.
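Equation (16) has the familiar closed-form least-squares solution. A minimal sketch, assuming $3 \times N$ matrices C and T as defined above:

```python
# Sketch: solve Eq. (16) for the 3x3 color correction matrix.
import numpy as np

def fit_ccm(C, T):
    # min_M ||T - M C||_F^2  =>  solve the transposed system C^T M^T = T^T
    # with lstsq for numerical stability (equivalent to M = T C^T (C C^T)^-1).
    M_t, *_ = np.linalg.lstsq(C.T, T.T, rcond=None)
    return M_t.T
```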

The objective of adaptive color correction considering grayness is to map achromatic colors with low colorimetric error using gray pixel detection, and to keep chromatic colors from deteriorating. A dedicated color correction matrix $\widehat {{{\textbf M}_a}}$ for a sub-dataset containing only achromatic color patches is calculated by least square regression. The adaptive white preserved color correction matrix $\widehat {{{\textbf M}_{wp}}}$ is calculated as

$$\widehat {{{\textbf M}_{wp}}} = w \cdot G{I^\ast }\widehat {{{\textbf M}_a}} + (1 - w \cdot G{I^\ast })\widehat {\textbf M},$$
where w is the weighting factor of the gray index. Through the weighted color correction matrices, the white-preserved color correction is performed adaptively for each color. Moreover, the proposed adaptive white preserved color correction is able to utilize not only the linear color correction (LCC) but also other types of color correction such as the polynomial color correction (PCC) [30], the root polynomial color correction (RPCC) [31], and the 3 × 4 color correction matrix (3 × 4 CCM) [52]. More details regarding these types of color corrections can be found in Section 4.2.
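Because $G{I^\ast }$ varies per pixel, Eq. (17) yields a spatially varying correction matrix. One way to apply it, sketched under the assumption of an (H, W, 3) white-balanced image and precomputed $3 \times 3$ matrices M and M_a (names are ours):

```python
# Sketch: per-pixel blending of the achromatic-only matrix M_a with the
# global matrix M using the gray index as weight, Eq. (17).
import numpy as np

def adaptive_correct(img, GI, M, M_a, w=1.0):
    alpha = w * GI[..., None, None]             # (H, W, 1, 1) blend weight
    M_wp = alpha * M_a + (1.0 - alpha) * M      # per-pixel 3x3 matrix
    # Apply M_wp(x, y) to the color vector at each pixel
    return np.einsum('hwij,hwj->hwi', M_wp, img)
```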

4. Experimental results

Conventional datasets such as the Gehler-Shi [53,54], the SFU [55,56], and the NUS [57] datasets cannot be used to evaluate the proposed color reproduction pipeline since they do not provide the white pixel intensity. Therefore, we captured a dataset using the full color OPA sensor [14], whose color filter array is RGBW. The unit pixel size of the OPA sensor is $2.8 \times 2.8\;\mathrm{\mu}\textrm{m}^2$ and the sensor was fabricated in a $0.11\;\mathrm{\mu}\textrm{m}$ CIS process. In the experiment, we used raw OPA sensor images with a resolution of $1544 \times 1100$ and a lens system with a 6 mm focal length and an F-number of 1.4. The X-Rite Macbeth ColorChecker Classic and X-Rite ColorChecker White Balance were used to evaluate the performance. The color difference $\Delta E_{ab}^\ast$ in CIELAB color space and the recovery angular error ${E_{rec}}$ are used as the evaluation metrics. The recovery angular error ${E_{rec}}$ is defined as

$${E_{rec}} = {\cos ^{ - 1}}\left( {\frac{{{{\textbf e}_{gt}}\cdot {{\textbf e}_{est}}}}{{||{{{\textbf e}_{gt}}} ||||{{{\textbf e}_{est}}} ||}}} \right),$$
where $\cdot$ denotes the vector dot product, ${{\textbf e}_{gt}}$ is the normalized RGB value of the ground truth, and ${{\textbf e}_{est}}$ is the normalized RGB value of the estimated illuminant color.
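For reference, Eq. (18) is a direct transcription in code; the clipping only guards against floating-point overshoot outside $[-1, 1]$.

```python
# Sketch: recovery angular error of Eq. (18), in degrees.
import numpy as np

def angular_error_deg(e_gt, e_est):
    cos = np.dot(e_gt, e_est) / (np.linalg.norm(e_gt) * np.linalg.norm(e_est))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```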

4.1 White balance using the gray pixel detection

The proposed white balance method is compared with Gray-World (GW) [16], Shades of Gray (SoG) [17], first-order Gray-Edge (GE1), second-order Gray-Edge (GE2) [18], Weighted Grey-Edge (WGE) [45], and color constancy using grey pixels (GP15 [23], GP18 [24]).

The proposed method has one free parameter, c, which sets the threshold on the standard deviation of the Skellam distribution used to detect reliable gray pixels. As shown in Fig. 3(a), the angular error was calculated while varying c over the dataset to determine the optimal value. The proposed method shows stable performance for c between 0.001 and 0.1; c = 0.001 was used in the following experiments because it gives the best performance. The parameter n of GP15 and GP18, the percentage of detected gray pixels, was set to 10% because the minimal angular error is obtained there, as shown in Fig. 3(b).


Fig. 3. The influence of the variable parameters on the white balance performance in our dataset. (a) Relationship between the angular error and the parameter c of the proposed method. (b) Relationship between the angular error and the parameter n% of the GP15.


First, we evaluated various white balance algorithms with achromatic color patches of the Macbeth ColorChecker under various illuminants produced by the X-Rite SpectraLight QC light booth. The results of this experiment are listed in Table 1. The smallest average angular error for all illuminants, an error of 0.991$^\circ $, is obtained with the proposed method. Figure 4 shows the visual results of each algorithm for achromatic color patches of the Macbeth ColorChecker.


Fig. 4. Visual results of each algorithm for achromatic color patches of the Macbeth ColorChecker. (a) The results under 6500 K illuminant. (b) The results under 4000 K illuminant. (c) The results under 3500 K illuminant. (d) The results under 2856 K illuminant.


Table 1. The angular errors of each algorithm for the gray patches of the Macbeth ColorChecker under various illuminations.

Next, we evaluated the performance on the OPA sensor image dataset, which consists of 42 images captured indoors (classroom, hallway, bookshelf, stairs, and poster board) and 62 images of various objects (office supplies, cups, dolls, and miniatures) under four different illuminant sources (6500 K, 4000 K, 3500 K, and 2856 K), as shown in Fig. 5. To estimate the actual illuminant color of the scene, every image contains a white board, which is masked out during illuminant estimation. The exact position of the white board was manually labeled. The results of the proposed method and the other methods are listed in Table 2. For both the mean angular error and the mean color difference, the proposed method outperforms the other methods. Compared with the second-best methods, the mean angular error and the mean color difference of the proposed method are 20.6% and 27.7% lower, respectively. Figure 6 shows the visual results of each method applied to images from the dataset.


Fig. 5. Examples of the dataset captured from the OPA sensor [14], which has an RGBW CFA.



Fig. 6. Visual results of each method applied to the image of the dataset. The angular error is displayed at the bottom left of each image.


Table 2. Performance of various methods and the proposed method on the OPA sensor image dataset.

4.2 Adaptive color correction considering grayness

The proposed adaptive color correction considering grayness is compared with the white-point preserving least square (WPPLS) regression method [34], which shares the same goal of preserving white. Moreover, since the proposed method can be applied to LCC, PCC [30], RPCC [31], and 3 × 4 CCM [52], we compared the results with and without the proposed method applied.

WPPLS finds the optimal color correction matrix under the constraint that a particular surface reflectance is mapped without error. In this experiment, the 19th patch of the Macbeth ColorChecker is used as the constrained surface reflectance of WPPLS. The color correction matrices of WPPLS and the proposed method are calculated based on the LCC. The angular errors of LCC, WPPLS, and the proposed method are compared in Fig. 7. Although the angular error of WPPLS was the smallest for the 9th and 18th to 20th patches, the angular error of the proposed algorithm was the smallest for most of the other patches, including the achromatic color patches.


Fig. 7. Comparison of the mean angular error for each patch of the Macbeth ColorChecker.


The results for the achromatic color patches (19th to 24th) and the chromatic color patches (1st to 18th) of each method are summarized in Table 3. The mean and median angular errors of LCC with the proposed method are the lowest in all cases. As white preservation methods focus on achromatic color accuracy, which is important in color reproduction, they increase chromatic color errors by definition [34]. Given the importance of white, the color difference results show that the proposed method preserves achromatic colors while preventing chromatic color degradation. A visual comparison of the color correction results for the Macbeth ColorChecker is shown in Fig. 8.


Fig. 8. Visual results of color correction for each patch in Macbeth ColorChecker. The average angular error of the proposed method is the lowest compared with other methods.


Table 3. The results of the angular error and the color difference for achromatic color patches and chromatic color patches in the Macbeth ColorChecker.

In addition, we evaluated the performance of the LCC, PCC, RPCC, and 3 × 4 CCM with and without the proposed adaptive white preserving approach. The results for achromatic color patches and chromatic color patches are listed in Table 4. For achromatic color patches, the angular error and the color difference of the proposed method are lower in all cases than those using other methods. In summary, we can obtain the lowest errors for achromatic colors with the proposed method, while keeping the chromatic colors from deteriorating.

Table 4. The results of the color correction methods with and without the proposed adaptive white preserving color correction.

5. Limitation and future work

The proposed white balance relies on the validated hypothesis [23] that most natural images in the real world contain detectable gray pixels. In the extreme case where an image contains no gray regions or no reliable gray pixels, the illuminant estimation accuracy will degrade. To model this situation, test images captured against a solid red background were used for evaluation. The results of various algorithms are compared in Fig. 9. Since the white board was reflected on the floor, the right side of the images, including the white board and its reflection, was masked out during illuminant estimation. Compared to the angular errors for the dataset images shown in Fig. 6, the angular error of each algorithm increases considerably, as shown in Fig. 9. Despite the challenging conditions, the angular error of the proposed method was the lowest among the competitors.


Fig. 9. Results of each white balance method on images captured against a solid red background. The angular error is displayed at the bottom left of each image.


The OPA sensor dataset used in the experiment was taken under the single illumination condition. The illuminant estimation using the proposed method under the multiple-illumination condition is a subject of future work.

6. Conclusion

In this paper, we have proposed a color reproduction pipeline for RGBW sensors based on a newly proposed gray pixel detection method that uses the white pixel intensity. A fast, accurate, and hardware-friendly illuminant-invariant measure for gray pixel detection was developed using the white pixel intensity. The proposed gray pixel detection algorithm can detect reliable gray pixels even in uniform regions and can be implemented in hardware with a 48 k NAND2 gate count and 9.5 kB of SRAM. To the best of our knowledge, the proposed white balance method is the first attempt to use the white pixel intensity for illuminant estimation. Experimental results show that the proposed pipeline achieves better illumination estimation accuracy and achromatic color preservation than the compared methods.

Funding

Center for Integrated Smart Sensors funded by the Ministry of Science and ICT, South Korea (CISS-2013M3A6A6073718); Samsung (G01180228).

Disclosures

The authors declare no conflicts of interest.

References

1. R. H. Kröger, “Anti-aliasing in image recording and display hardware: lessons from nature,” J. Opt. A: Pure Appl. Opt. 6(8), 743–748 (2004).

2. R. Lukac and K. N. Plataniotis, “Color filter arrays: Design and performance analysis,” IEEE Trans. Consumer Electronics 51(4), 1260–1267 (2005).

3. Y. Monno, S. Kikuchi, M. Tanaka, and M. Okutomi, “A practical one-shot multispectral imaging system using a single image sensor,” IEEE Trans. Image Process. 24(10), 3048–3059 (2015).

4. J. Couillaud, A. Horé, and D. Ziou, “Nature-inspired color-filter array for enhancing the quality of images,” J. Opt. Soc. Am. A 29(8), 1580–1587 (2012).

5. B. E. Bayer, “Color imaging array,” U.S. patent 3,971,065 (1976).

6. I. Hirota, “Solid-state imaging device, method for processing signal of solid-state imaging device, and imaging apparatus,” U.S. patent 8,436,925 (2013).

7. J. T. Compton and J. F. Hamilton, Jr., “Image sensor with improved light sensitivity,” U.S. patent 8,139,130 (2005).

8. H. Honda, Y. Iida, G. Itoh, Y. Egawa, and H. Seki, “A novel Bayer-like WRGB color filter array for CMOS image sensors,” in Human Vision and Electronic Imaging XII (2007), p. 64921J.

9. Y. Kwak, J. Park, and D.-S. Park, “Generating vivid colors on red-green-blue-white electronic-paper display,” Appl. Opt. 47(25), 4491–4500 (2008).

10. Y. Xiong, L. Wang, W. Xu, J. Zou, H. Wu, Y. Xu, J. Peng, J. Wang, Y. Cao, and G. Yu, “Performance analysis of PLED based flat panel display with RGBW sub-pixel layout,” Org. Electron. 10(5), 857–862 (2009).

11. S. Jee, K. Song, and M. Kang, “Sensitivity and resolution improvement in RGBW color filter array sensor,” Sensors 18(5), 1647 (2018).

12. B.-S. Choi, S.-H. Kim, J. Lee, D. Seong, J.-K. Shin, S. Chang, J. Park, and S.-J. Lee, “CMOS image sensor for extracting depth information using pixel aperture technique,” in 2018 IEEE International Instrumentation and Measurement Technology Conference (2018), pp. 1–5.

13. J. Lee, B.-S. Choi, S.-H. Kim, J. Lee, J. Lee, S. Chang, J. Park, S.-J. Lee, and J.-K. Shin, “Effects of offset pixel aperture width on the performances of monochrome CMOS image sensors for depth extraction,” Sensors 19(8), 1823 (2019).

14. B.-S. Choi, J. Lee, S.-H. Kim, S. Chang, J. Park, S.-J. Lee, and J.-K. Shin, “Analysis of disparity information for depth extraction using CMOS image sensor with offset pixel aperture technique,” Sensors 19(3), 472 (2019).

15. L. T. Maloney and B. A. Wandell, “Color constancy: a method for recovering surface spectral reflectance,” J. Opt. Soc. Am. A 3(1), 29–33 (1986).

16. G. Buchsbaum, “A spatial processor model for object colour perception,” J. Franklin Inst. 310(1), 1–26 (1980).

17. G. D. Finlayson and E. Trezzi, “Shades of gray and colour constancy,” in Color and Imaging Conference (2004), pp. 37–41.

18. J. Van De Weijer, T. Gevers, and A. Gijsenij, “Edge-based color constancy,” IEEE Trans. Image Process. 16(9), 2207–2214 (2007).

19. H. R. V. Joze and M. S. Drew, “Exemplar-based color constancy and multiple illumination,” IEEE Trans. Pattern Anal. Mach. Intell. 36(5), 860–873 (2014).

20. A. Gijsenij and T. Gevers, “Color constancy using natural image statistics and scene semantics,” IEEE Trans. Pattern Anal. Mach. Intell. 33(4), 687–698 (2011).

21. A. Gijsenij, T. Gevers, and J. Van De Weijer, “Generalized gamut mapping using image derivative structures for color constancy,” Int. J. Comput. Vis. 86(2-3), 127–139 (2010).

22. S.-B. Gao, M. Zhang, C.-Y. Li, and Y.-J. Li, “Improving color constancy by discounting the variation of camera spectral sensitivity,” J. Opt. Soc. Am. A 34(8), 1448–1462 (2017).

23. K.-F. Yang, S.-B. Gao, and Y.-J. Li, “Efficient illuminant estimation for color constancy using grey pixels,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 2254–2263.

24. X. Yang, X. Jin, and J. Zhang, “Improved single-illumination estimation accuracy via redefining the illuminant-invariant descriptor and the grey pixels,” Opt. Express 26(22), 29055–29067 (2018).

25. P.-C. Hung, “Colorimetric calibration in electronic imaging devices using a look-up-table model and interpolations,” J. Electron. Imaging 2(1), 53–62 (1993).

26. J. S. McElvain and W. Gish, “Camera color correction using two-dimensional transforms,” in Color and Imaging Conference (2013), pp. 250–256.

27. P.-C. Hung, “Color rendition using three-dimensional interpolation,” Imaging Applications in the Work World 0900, 111–115 (1988).

28. H. R. Kang and P. G. Anderson, “Neural network applications to the color scanner and printer calibrations,” J. Electron. Imaging 1(2), 125–136 (1992).

29. V. Cheung, S. Westland, D. Connah, and C. Ripamonti, “A comparative study of the characterisation of colour cameras by means of neural networks and polynomial transforms,” Color. Technol. 120(1), 19–25 (2004).

30. G. Hong, M. R. Luo, and P. A. Rhodes, “A study of digital camera colorimetric characterization based on polynomial modeling,” Color Res. Appl. 26(1), 76–84 (2001).

31. G. D. Finlayson, M. Mackiewicz, and A. Hurlbert, “Color correction using root-polynomial regression,” IEEE Trans. Image Process. 24(5), 1460–1470 (2015).

32. S. Lim and A. Silverstein, “Spatially varying color correction (SVCC) matrices for reduced noise,” in Color and Imaging Conference (2004), pp. 76–81.

33. G. D. Finlayson and M. S. Drew, “White-point preserving color correction,” in Color and Imaging Conference (1997), pp. 258–261.

34. G. D. Finlayson and M. S. Drew, “Constrained least-squares regression in color spaces,” J. Electron. Imaging 6(4), 484–494 (1997).

35. S. A. Shafer, “Using color to separate reflection components,” Color Res. Appl. 10(4), 210–218 (1985).

36. G. J. Klinker, S. A. Shafer, and T. Kanade, “A physical approach to color image understanding,” Int. J. Comput. Vis. 4(1), 7–38 (1990).

37. M. Oren and S. K. Nayar, “Generalization of Lambert's reflectance model,” in Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques (1994), pp. 239–246.

38. W. Shi, C. C. Loy, and X. Tang, “Deep specialized network for illuminant estimation,” in European Conference on Computer Vision (2016), pp. 371–387.

39. G. D. Finlayson, S. D. Hordley, and P. M. Hubel, “Color by correlation: A simple, unifying framework for color constancy,” IEEE Trans. Pattern Anal. Mach. Intell. 23(11), 1209–1221 (2001).

40. A. Gijsenij, T. Gevers, and J. Van De Weijer, “Computational color constancy: Survey and experiments,” IEEE Trans. Image Process. 20(9), 2475–2489 (2011).

41. G. D. Finlayson and R. Zakizadeh, “Reproduction angular error: An improved performance metric for illuminant estimation,” in Proceedings of the British Machine Vision Conference (2014), pp. 1–11.

42. H. Y. Chong, S. J. Gortler, and T. Zickler, “The von Kries hypothesis and a basis for color constancy,” in 2007 IEEE 11th International Conference on Computer Vision (2007), pp. 1–8.

43. G. D. Finlayson, M. S. Drew, and B. V. Funt, “Color constancy: generalized diagonal transforms suffice,” J. Opt. Soc. Am. A 11(11), 3011–3019 (1994).

44. S. D. Hordley, “Scene illuminant estimation: past, present, and future,” Color Res. Appl. 31(4), 303–314 (2006).

45. A. Gijsenij, T. Gevers, and J. Van De Weijer, “Improving color constancy by photometric edge weighting,” IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 918–929 (2012).

46. S. Kawada, R. Kuroda, and S. Sugawa, “Color reproductivity improvement with additional virtual color filters for WRGB image sensor,” in Color Imaging XVIII (2013), pp. 1–7.

47. C. Park, K. Song, and M. Kang, “G-channel restoration for RWB CFA with double-exposed W channel,” Sensors 17(2), 293 (2017).

48. P.-H. Su, P.-C. Chen, and H. H. Chen, “Compensation of spectral mismatch to enhance WRGB demosaicking,” in IEEE International Conference on Image Processing (2015), pp. 68–72.

49. L. J. Van Vliet, I. T. Young, and J. J. Gerbrands, Fundamentals of Image Processing (Delft University of Technology, 1998).

50. J. G. Skellam, “The frequency distribution of the difference between two Poisson variates belonging to different populations,” J. Roy. Statist. Soc. 109(3), 296 (1946).

51. Y. Hwang, J.-S. Kim, and I.-S. Kweon, “Sensor noise modeling using the Skellam distribution: Application to the color edge detection,” in 2007 IEEE Conference on Computer Vision and Pattern Recognition (2007), pp. 1–8.

52. J. Vaillant, A. Clouet, and D. Alleysson, “Color correction matrix for sparse RGB-W image sensor without IR cutoff filter,” in Unconventional Optical Imaging (2018), p. 1067704.

53. P. V. Gehler, C. Rother, A. Blake, T. Minka, and T. Sharp, “Bayesian color constancy revisited,” in 2008 IEEE Conference on Computer Vision and Pattern Recognition (2008), pp. 1–8.

54. L. Shi and B. Funt, “Re-processed version of the Gehler color constancy dataset of 568 images,” http://www.cs.sfu.ca/∼colour/data/ (2010).

55. K. Barnard, L. Martin, B. Funt, and A. Coath, “A data set for color research,” Color Res. Appl. 27(3), 147–151 (2002).

56. F. Ciurea and B. Funt, “A large image database for color constancy research,” in Color and Imaging Conference (2003), pp. 160–164.

57. D. Cheng, D. K. Prasad, and M. S. Brown, “Illuminant estimation for color constancy: why spatial-domain methods work and the role of the color distribution,” J. Opt. Soc. Am. A 31(5), 1049–1058 (2014).
