Abstract
Many types of RGBW color filter arrays (CFAs) have been proposed for various purposes. Most studies utilize the white pixel intensity to improve the signal-to-noise ratio of the image and to demosaic it, but we note that the white pixel intensity can also be utilized to improve color reproduction. In this paper, we propose a color reproduction pipeline for RGBW CFA sensors based on fast, accurate, and hardware-friendly gray pixel detection using the white pixel intensity. The proposed pipeline was tested on a dataset captured with an OPA sensor, which has an RGBW CFA. Experimental results show that the proposed pipeline estimates the illumination more accurately and preserves achromatic colors better than conventional methods that do not use the white pixel intensity.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Many types of color filter arrays (CFAs) have recently been proposed [1–5] to improve image quality. As one of these efforts, the white pixel, a transparent filter element, has been included in CFAs [6–8], as shown in Fig. 1. The wide spectral response of the white pixel is utilized not only to enhance brightness and reflectance in displays [9,10], but also to increase the signal-to-noise ratio of the color image in cameras [11]. In addition, white pixels with integrated micro-apertures are used to obtain a color image and depth-related disparity simultaneously in a single-lens imaging system [12–14]. We note that white pixels can also be used to improve image color reproduction.
The conventional color reproduction pipeline consists of two steps: white balance and color correction. White balance is motivated by the color constancy ability of the human visual system, which perceives object colors as approximately constant regardless of the color of the light source [15]. The goal of computational color constancy is to eliminate the color cast introduced by the incident light. In order to render the image as if it were captured under a white light source, most color constancy methods first estimate the scene illumination from the color-biased image captured by the sensor; they then correct the image by applying a transform matrix calculated from the color of the estimated illuminant. Approaches to illumination estimation are mainly divided into statistical methods [16–18] and learning-based methods [19–21]. Although learning-based approaches show remarkable performance, their training-data dependency [22], expensive hardware requirements, and slow running speed make them difficult to use in practical applications. Based on the hypothesis that most natural images contain detectable gray pixels, gray pixel detection approaches [23,24] estimate the illumination quickly and as accurately as learning-based approaches. However, since these methods use a local window to detect gray pixels, it is difficult to extract reliable gray pixels in uniform regions and at boundaries between different color surfaces. In addition, the gray index sorting algorithm, used in previous works to select the most reliable top n% of gray pixels in every frame, increases hardware cost.
The objective of color correction is to transform the device-dependent color space into a device-independent color space. Since the spectral sensitivity of the image sensor usually differs from the desired one, color correction is an essential step in color reproduction. Among the many proposed algorithms [25–29], least square regression approaches [30–32] are widely used due to their low computational complexity. White color preservation is one of the most important features of color reproduction. However, the color correction matrix obtained by least square regression causes the white color to shift, because it minimizes the colorimetric error over all calibration colors. Constrained least square regression [33,34] was proposed with a constraint that maps a selected white color exactly. Although constrained least square regression performs well for a hypothetical set containing all surface reflectances, the colorimetric error increases when the input image does not contain sufficient white reflectance.
In this paper, we propose a color reproduction pipeline for RGBW sensors, where the limitations of white balance and color correction are resolved by using the white pixel intensity. The contributions of the proposed pipeline are as follows.
• We propose a fast, accurate, and hardware-friendly gray pixel detection algorithm for RGBW sensors. Since the proposed algorithm does not use a local window, reliable gray pixels can be detected even in uniform regions. Instead of using a computationally heavy sorting algorithm, the properties of the Skellam distribution are used to efficiently distinguish reliable gray pixels. To the best of our knowledge, the proposed algorithm is the first attempt to use white pixel intensity for illuminant estimation.
• We propose an adaptive white preserved color correction that tunes the color correction matrix according to the gray index of each pixel calculated by the gray pixel detection algorithm. The tuning is performed by a weighted summation of color correction matrices with the gray index as the weight factor. Such color correction matrix tuning preserves achromatic colors while keeping chromatic colors from deteriorating.
2. Gray pixel detection using the white pixel intensity
The achromatic region provides an important cue to estimate illumination as it reflects the color of the incident light of the image. If the achromatic region is extracted from the color-biased image, the scene illumination can be accurately estimated.
Yang et al. verified the hypothesis that most natural images in the real world contain detectable gray pixels, which can be used for illuminant estimation [23,24]. Based on this hypothesis, previous works defined an illuminant-invariant measure (IIM) using local contrast or local gradients. These methods may fail to detect reliable gray pixels that are isolated or located within a uniform region, as the IIM is calculated within a local window. To solve this problem, we define an IIM using the white pixel intensity of the RGBW sensor. The mathematical derivation is as follows.
The pixel intensity ${I_i}(x,y) \in \{ {I_R},{I_G},{I_B},{I_W}\} $ at $(x,y)$ can be represented by the dichromatic reflection model [35,36] as follows:

$$I_i(x,y) = m_b(x,y)\int E(\lambda )R(x,y,\lambda )\rho_i(\lambda )\,d\lambda + m_s(x,y)\int E(\lambda )\rho_i(\lambda )\,d\lambda ,$$

where $m_b(x,y)$ and $m_s(x,y)$ are the body and specular reflection factors, $E(\lambda )$ is the spectral power distribution of the illuminant, $R(x,y,\lambda )$ is the surface reflectance, and $\rho_i(\lambda )$ is the spectral sensitivity of channel i.
The goal of white balance is to estimate $E(\lambda )$ from ${I_i}(x,y)$ and render the image as if it were captured under a white light source. However, this is an ill-posed problem: the only quantity available from the image sensor is the pixel intensity, which changes not only with the spectral distribution of the illuminant but also with the reflectance of the object. Thus, additional assumptions are needed to solve the problem.
Although the pixel intensity depends on specular reflection, numerous illuminant estimation algorithms ignore it for simplicity based on the Lambertian reflectance model [37–39] as follows:
$$I_i(x,y) = m(x,y)\int E(\lambda )R(x,y,\lambda )\rho_i(\lambda )\,d\lambda ,$$

where $m(x,y)$ is the Lambertian shading and $R(x,y,\lambda )$ is the surface reflectance. In practice, it is impossible to estimate the continuous spectral function of the illuminant from the pixel intensity, which is integrated over wavelength [18,40,41]. The von Kries coefficient law [42] is adopted in numerous works [43–46] to transform this model into a simplified diagonal form,

$$I_i(x,y) = E_i(x,y)R_i(x,y),$$

where ${E_i}(x,y)$ is the diagonal illumination term and ${R_i}(x,y)$ is the reflectance. In logarithmic space, the pixel intensity is the sum of the logarithms of ${E_i}(x,y)$ and ${R_i}(x,y)$:

$$\log I_i(x,y) = \log E_i(x,y) + \log R_i(x,y).$$

Suppose the illuminant is uniform over the R, G, B, and W pixels at the same position $(x,y)$. Once the white pixel intensity is estimated from the R, G, and B pixels as ${I^{\prime}_W}(x,y)$, the difference between $\log {I_W}(x,y)$ and $\log {I^{\prime}_W}(x,y)$ is independent of the illuminant ${E_i}(x,y)$ in logarithmic space:

$$\Delta \log I_W(x,y) = \log I_W(x,y) - \log I^{\prime}_W(x,y).$$

To determine whether the IIM at each pixel is reliable, its standard deviation is considered. The pixel intensity is determined by the number of arriving photons, which obey the laws of quantum physics [49]. Since the arrival of photons at a pixel follows a Poisson distribution, $\Delta \log {I_W}(x,y)$ is distributed as the difference of the logarithms of two Poisson random variables. Because the standard deviation of $\Delta \log {I_W}(x,y)$ is too complicated to calculate, the difference between ${I_W}(x,y)$ and ${I^{\prime}_W}(x,y)$ is designated as the IIM instead, under the assumption that $\Delta {I_W}(x,y)$ also reasonably eliminates the effect of the illuminant. The IIM for detecting gray pixels is calculated as follows:

$$\Delta I_W(x,y) = I_W(x,y) - I^{\prime}_W(x,y).$$
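As a concrete illustration, the per-pixel IIM and the reliability test it enables can be sketched in a few lines of NumPy. The weights used to estimate ${I^{\prime}_W}$ from the R, G, and B intensities and the exact form of the reliability threshold are not specified here, so both are illustrative assumptions (equal weights, and a bound of c Skellam standard deviations):

```python
import numpy as np

def gray_pixel_mask(I_R, I_G, I_B, I_W, c=0.001, w=(1/3, 1/3, 1/3)):
    """Per-pixel IIM computation and reliability test (sketch).

    Assumptions (illustrative, not from the paper):
      - I'_W is a weighted sum of the co-sited R, G, B intensities,
        with hypothetical weights `w`.
      - A pixel counts as reliably gray when |Delta I_W| is within
        c standard deviations of the Skellam distribution, whose
        variance is the sum of the two Poisson means.
    """
    I_W_est = w[0] * I_R + w[1] * I_G + w[2] * I_B   # I'_W(x, y)
    iim = I_W - I_W_est                              # Delta I_W(x, y)
    sigma = np.sqrt(I_W + I_W_est)                   # Skellam std. dev.
    return np.abs(iim) <= c * sigma                  # boolean gray mask
```

Because Poisson-limited intensities make the Skellam variance simply the sum of the two means, no separate noise calibration is needed for the threshold.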
Since the IIM is calculated at each pixel, it can be used in uniform regions, boundary regions, and both single- and multi-illuminant environments. The probability distribution of the IIM follows the Skellam distribution [50,51], because the difference of two Poisson random variables is Skellam distributed:

$$P(\Delta I_W = k) = e^{-(\mu_1 + \mu_2)}\left(\frac{\mu_1}{\mu_2}\right)^{k/2} I_{|k|}\!\left(2\sqrt{\mu_1 \mu_2}\right),$$

where $\mu_1$ and $\mu_2$ are the means of the two underlying Poisson distributions and $I_{|k|}(\cdot)$ is the modified Bessel function of the first kind. The mean and variance of the Skellam distribution are $\mu_1 - \mu_2$ and $\mu_1 + \mu_2$, respectively.

3. Color reproduction with gray pixel
3.1 White balance using the gray pixel detection
The color of the illuminant can be estimated from $G{I^\ast }$, which carries the illuminant characteristics. According to the definition of $G{I^\ast }$, its magnitude is close to one for achromatic colors. The estimated illuminant color ${e_j}$ is calculated as

$$e_j = \frac{1}{N}\sum_{(x,y)} GI^\ast (x,y)\, I_j(x,y), \qquad j \in \{R, G, B\},$$

where N is the number of non-zero $G{I^\ast }$ values. White balance is then applied to each pixel value of the input image by

$$I_j^{WB}(x,y) = \frac{I_j(x,y)}{e_j},$$

where $I_j^{WB}$ is the white-balanced pixel value.

3.2 Adaptive color correction considering grayness
For a $3 \times 1$ input vector ${\textbf c}$ formed by the R, G, and B sensor responses, linear color correction is typically performed by

$${\textbf t} = {\textbf M}{\textbf c},$$

where ${\textbf t}$ is a $3 \times 1$ vector of the known standard RGB values that we want to restore, and ${\textbf M}$ is a $3 \times 3$ color correction matrix. Unlike the typical case, the elements of ${\textbf c}$ are white-balanced pixel values rather than raw sensor responses, because white balance precedes color correction in our pipeline. For a set of N training patches, we denote the $3 \times N$ target color matrix and the $3 \times N$ white-balanced color matrix as ${\textbf T}$ and ${\textbf C}$, respectively. The color correction matrix is generally calculated by least square regression:

$$\widehat {\textbf M} = \mathop{\arg\min}\limits_{\textbf M} \left\| {\textbf T} - {\textbf M}{\textbf C} \right\|_F^2 = {\textbf T}{\textbf C}^T({\textbf C}{\textbf C}^T)^{-1}.$$

The objective of the adaptive color correction considering grayness is to map achromatic colors with low colorimetric error using the gray pixel detection, while keeping chromatic colors from deteriorating. A dedicated color correction matrix $\widehat {{{\textbf M}_a}}$ for a sub-dataset containing only achromatic color patches is calculated by least square regression. The adaptive white preserved color correction matrix $\widehat {{{\textbf M}_{wp}}}$ is then calculated as the weighted summation

$$\widehat {{{\textbf M}_{wp}}}(x,y) = G{I^\ast }(x,y)\,\widehat {{{\textbf M}_a}} + \left(1 - G{I^\ast }(x,y)\right)\widehat {\textbf M},$$

with the gray index $G{I^\ast }(x,y)$ as the weight factor.
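A minimal NumPy sketch of the white balance and adaptive color correction steps follows, assuming a standard von Kries channel division and a per-pixel blend of the global and achromatic-patch matrices with the gray index as the weight (function names and array layouts are illustrative):

```python
import numpy as np

def white_balance(img, e):
    """Von Kries division: img is (H, W, 3), e the estimated illuminant."""
    return img / e[None, None, :]

def fit_ccm(T, C):
    """Least-squares 3x3 color correction matrix, M = T C^T (C C^T)^-1.
    T, C are 3 x N matrices of target and white-balanced patch colors."""
    return T @ C.T @ np.linalg.inv(C @ C.T)

def adaptive_color_correction(img_wb, gi, M, M_a):
    """Blend the global matrix M with the achromatic-patch matrix M_a
    per pixel, with the gray index gi in [0, 1] as the weight.
    img_wb: (H, W, 3) white-balanced image; gi: (H, W) gray index."""
    w = gi[..., None, None]                  # (H, W, 1, 1)
    M_wp = w * M_a + (1.0 - w) * M           # per-pixel 3x3 matrices
    return np.einsum('hwij,hwj->hwi', M_wp, img_wb)
```

Blending the matrices rather than the corrected outputs keeps the operation a single per-pixel matrix multiply, which suits a hardware pipeline.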
4. Experimental results
Conventional datasets such as the Gehler-Shi [53,54], SFU [55,56], and NUS [57] datasets cannot be used to evaluate the proposed color reproduction pipeline since they do not include the white pixel intensity. Therefore, we captured a dataset using the full color OPA sensor [14], whose color filter array is RGBW. The unit pixel size of the OPA sensor is $2.8 \times 2.8\ \mathrm{\mu}\textrm{m}^2$, and the sensor was fabricated in a $0.11\ \mathrm{\mu}\textrm{m}$ CIS process. In the experiments, we used raw OPA sensor images with a resolution of $1544 \times 1100$ and a lens system with a 6 mm focal length and an F-number of 1.4. The X-Rite Macbeth ColorChecker Classic and X-Rite ColorChecker White Balance were used to evaluate the performance. The color difference $\Delta E_{ab}^\ast $ in CIELAB color space and the recovery angular error ${E_{rec}}$ are used as the evaluation metrics. The recovery angular error ${E_{rec}}$ is defined as

$$E_{rec} = \cos^{-1}\!\left(\frac{{\textbf e}_{gt} \cdot {\textbf e}_{est}}{\left\| {\textbf e}_{gt} \right\| \left\| {\textbf e}_{est} \right\|}\right),$$

where ${\textbf e}_{gt}$ and ${\textbf e}_{est}$ are the ground-truth and estimated illuminant colors, respectively.
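The recovery angular error is the angle between the ground-truth and estimated illuminant vectors; a minimal implementation, reported in degrees, is:

```python
import numpy as np

def recovery_angular_error(e_gt, e_est):
    """Angle in degrees between ground-truth and estimated illuminants."""
    cos = np.dot(e_gt, e_est) / (np.linalg.norm(e_gt) * np.linalg.norm(e_est))
    # Clip guards against round-off pushing the cosine outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

The metric is invariant to the overall brightness of the illuminant, which is why it is preferred for white balance evaluation.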
4.1 White balance using the gray pixel detection
The proposed white balance method is compared with Gray-World (GW) [16], Shades of Gray (SoG) [17], first-order Gray-Edge (GE1) and second-order Gray-Edge (GE2) [18], Weighted Grey-Edge (WGE) [45], and color constancy using grey pixels (GP15 [23], GP18 [24]).
The proposed method has one free parameter, c, which determines the threshold on the standard deviation of the Skellam distribution used to detect reliable gray pixels. As shown in Fig. 3(a), the angular error was calculated while varying c over the dataset to determine its optimal value. The proposed method shows stable performance for c between 0.001 and 0.1; c = 0.001 was used in the following experiments because it gives the best performance. The parameter n of GP15 and GP18, the percentage of detected gray pixels, was set to 10%, where the minimal angular error is obtained, as shown in Fig. 3(b).
First, we evaluated various white balance algorithms with achromatic color patches of the Macbeth ColorChecker under various illuminants produced by the X-Rite SpectraLight QC light booth. The results of this experiment are listed in Table 1. The smallest average angular error for all illuminants, an error of 0.991$^\circ $, is obtained with the proposed method. Figure 4 shows the visual results of each algorithm for achromatic color patches of the Macbeth ColorChecker.
Next, we evaluated the performance on the OPA sensor image dataset, which consists of 42 images captured indoors (classroom, hallway, bookshelf, stairs, and poster board scenes) and 62 images of various objects (office supplies, cups, dolls, and miniatures) under four different illuminants (6500 K, 4000 K, 3500 K, and 2856 K), as shown in Fig. 5. To measure the actual illuminant color of the scene, every image contains a white board, which is masked out during illuminant estimation. The exact position of the white board was labeled manually. The results of the proposed and competing methods are listed in Table 2. For both the mean angular error and the mean color difference, the proposed method outperforms the others: compared with the second-best methods, its mean angular error and mean color difference are 20.6% and 27.7% lower, respectively. Figure 6 shows the visual results of each method applied to an image of the dataset.
4.2 Adaptive color correction considering grayness
The proposed adaptive color correction considering grayness is compared with the white-point preserving least squares (WPPLS) regression method [34], which shares the goal of preserving white. Moreover, since the proposed method can be applied on top of LCC, PCC [30], RPCC [31], and the 3 × 4 CCM [52], we compared the results with and without the proposed method.
WPPLS finds the optimal color correction matrix under the constraint that a particular surface reflectance is mapped without error. In this experiment, the 19th patch of the Macbeth ColorChecker is used as the constrained surface reflectance for WPPLS. The color correction matrices of WPPLS and the proposed method are calculated based on LCC. The angular errors of LCC, WPPLS, and the proposed method are compared in Fig. 7. Although the angular error of WPPLS was the smallest for the 9th and 18th to 20th patches, the proposed algorithm achieved the smallest angular error for most of the other patches, including the achromatic color patches.
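For reference, such a white-point constraint can be imposed in closed form. The sketch below is one standard realization of equality-constrained least squares, solved row by row with a Lagrange multiplier; it is not necessarily the exact formulation of [34]. T and C are the 3 × N target and source patch matrices, and c_w, t_w the source and target values of the preserved patch:

```python
import numpy as np

def wppls_ccm(T, C, t_w, c_w):
    """Constrained least squares: minimize ||T - M C||_F s.t. M c_w = t_w.

    One closed-form realization (illustrative): correct the unconstrained
    solution along the direction A^-1 c_w, where A = C C^T, choosing the
    per-row multipliers so the constraint holds exactly.
    """
    A_inv = np.linalg.inv(C @ C.T)
    M0 = T @ C.T @ A_inv                  # unconstrained LS solution
    d = A_inv @ c_w                       # correction direction
    mu = (t_w - M0 @ c_w) / (c_w @ d)     # per-row Lagrange multipliers
    return M0 + np.outer(mu, d)
```

By construction, the returned matrix maps c_w exactly to t_w while perturbing the least-squares fit as little as possible for the remaining patches.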
The results for the achromatic color patches (19th to 24th) and chromatic color patches (1st to 18th) of each method are summarized in Table 3. The mean and median angular errors of LCC with the proposed method are the lowest in all cases. Because white preservation methods focus on achromatic colors, which are important in color reproduction, they increase chromatic color errors by definition [34]. The color difference results show that the proposed method preserves achromatic colors while preventing chromatic color degradation. A visual comparison of the color correction results for the Macbeth ColorChecker is shown in Fig. 8.
In addition, we evaluated the performance of the LCC, PCC, RPCC, and 3 × 4 CCM with and without the proposed adaptive white preserving approach. The results for achromatic color patches and chromatic color patches are listed in Table 4. For achromatic color patches, the angular error and the color difference of the proposed method are lower in all cases than those using other methods. In summary, we can obtain the lowest errors for achromatic colors with the proposed method, while keeping the chromatic colors from deteriorating.
5. Limitation and future work
The proposed white balance method relies on the validated hypothesis [23] that most natural images in the real world contain detectable gray pixels. In the extreme case where an image contains no gray regions or no reliable gray pixels, the illuminant estimation accuracy degrades. To model this situation, test images captured on a solid red background were used for evaluation, and the results of the various algorithms are compared in Fig. 9. Since the white board was reflected on the floor, the right side of each image, including the white board and its reflection, was masked out during illuminant estimation. Compared with the angular errors for the dataset images shown in Fig. 6, the angular error of every algorithm increases considerably, as shown in Fig. 9. Despite the challenging conditions, the angular error of the proposed method was the lowest among the competitors.
The OPA sensor dataset used in the experiment was taken under the single illumination condition. The illuminant estimation using the proposed method under the multiple-illumination condition is a subject of future work.
6. Conclusion
In this paper, we have proposed a color reproduction pipeline for RGBW sensors based on a newly proposed gray pixel detection using the white pixel intensity. A fast, accurate, and hardware-friendly illuminant-invariant measure for gray pixel detection was developed using the white pixel intensity. The proposed gray pixel detection algorithm can detect reliable gray pixels even in a uniform region, and can be implemented in hardware with a 48 k NAND2 gate count and 9.5 kB of SRAM. To the best of our knowledge, the proposed white balance method is the first attempt to use the white pixel intensity for illuminant estimation. Experimental results show that the proposed pipeline achieves better illumination estimation accuracy and achromatic color preservation than the compared methods.
Funding
Center for Integrated Smart Sensors funded by the Ministry of Science and ICT, South Korea (CISS-2013M3A6A6073718); Samsung (G01180228).
Disclosures
The authors declare no conflicts of interest.
References
1. R. H. Kröger, “Anti-aliasing in image recording and display hardware: lessons from nature,” J. Opt. A: Pure Appl. Opt. 6(8), 743–748 (2004). [CrossRef]
2. R. Lukac and K. N. Plataniotis, “Color filter arrays: Design and performance analysis,” IEEE Trans. Consumer Electronics 51(4), 1260–1267 (2005). [CrossRef]
3. Y. Monno, S. Kikuchi, M. Tanaka, and M. Okutomi, “A practical one-shot multispectral imaging system using a single image sensor,” IEEE Trans. Image Process. 24(10), 3048–3059 (2015). [CrossRef]
4. J. Couillaud, A. Horé, and D. Ziou, “Nature-inspired color-filter array for enhancing the quality of images,” J. Opt. Soc. Am. A 29(8), 1580–1587 (2012). [CrossRef]
5. B. E. Bayer, “Color imaging array,” U.S. patent 3,971,065 (1976).
6. I. Hirota, “Solid-state imaging device, method for processing signal of solid-state imaging device, and imaging apparatus,” U.S. patent 8,436,925 (2013).
7. J. T. Compton and J. F. Hamilton Jr, “Image sensor with improved light sensitivity,” U.S. patent 8,139,130 (2005).
8. H. Honda, Y. Iida, G. Itoh, Y. Egawa, and H. Seki, “A novel Bayer-like WRGB color filter array for CMOS image sensors,” in Human Vision and Electronic Imaging XII (2007), p. 64921J.
9. Y. Kwak, J. Park, and D.-S. Park, “Generating vivid colors on red-green-blue-white electonic-paper display,” Appl. Opt. 47(25), 4491–4500 (2008). [CrossRef]
10. Y. Xiong, L. Wang, W. Xu, J. Zou, H. Wu, Y. Xu, J. Peng, J. Wang, Y. Cao, and G. Yu, “Performance analysis of PLED based flat panel display with RGBW sub-pixel layout,” Org. Electron. 10(5), 857–862 (2009). [CrossRef]
11. S. Jee, K. Song, and M. Kang, “Sensitivity and resolution improvement in RGBW color filter array sensor,” Sensors 18(5), 1647 (2018). [CrossRef]
12. B.-S. Choi, S.-H. Kim, J. Lee, D. Seong, J.-K. Shin, S. Chang, J. Park, and S.-J. Lee, “CMOS image sensor for extracting depth information using pixel aperture technique,” in 2018 IEEE International Instrumentation and Measurement Technology Conference (2018), pp. 1–5.
13. J. Lee, B.-S. Choi, S.-H. Kim, J. Lee, J. Lee, S. Chang, J. Park, S.-J. Lee, and J.-K. Shin, “Effects of Offset Pixel Aperture Width on the Performances of Monochrome CMOS Image Sensors for Depth Extraction,” Sensors 19(8), 1823 (2019). [CrossRef]
14. B.-S. Choi, J. Lee, S.-H. Kim, S. Chang, J. Park, S.-J. Lee, and J.-K. Shin, “Analysis of Disparity Information for Depth Extraction Using CMOS Image Sensor with Offset Pixel Aperture Technique,” Sensors 19(3), 472 (2019). [CrossRef]
15. L. T. Maloney and B. A. Wandell, “Color constancy: a method for recovering surface spectral reflectance,” J. Opt. Soc. Am. A 3(1), 29–33 (1986). [CrossRef]
16. G. Buchsbaum, “A spatial processor model for object colour perception,” J. Franklin Inst. 310(1), 1–26 (1980). [CrossRef]
17. G. D. Finlayson and E. Trezzi, “Shades of gray and colour constancy,” in Color and Imaging Conference (2004), pp. 37–41.
18. J. Van De Weijer, T. Gevers, and A. Gijsenij, “Edge-based color constancy,” IEEE Trans. Image Process. 16(9), 2207–2214 (2007). [CrossRef]
19. H. R. V. Joze and M. S. Drew, “Exemplar-based color constancy and multiple illumination,” IEEE Trans. Pattern Anal. Mach. Intell. 36(5), 860–873 (2014). [CrossRef]
20. A. Gijsenij and T. Gevers, “Color constancy using natural image statistics and scene semantics,” IEEE Trans. Pattern Anal. Mach. Intell. 33(4), 687–698 (2011). [CrossRef]
21. A. Gijsenij, T. Gevers, and J. Van De Weijer, “Generalized gamut mapping using image derivative structures for color constancy,” Int. J. Comput. Vis. 86(2-3), 127–139 (2010). [CrossRef]
22. S.-B. Gao, M. Zhang, C.-Y. Li, and Y.-J. Li, “Improving color constancy by discounting the variation of camera spectral sensitivity,” J. Opt. Soc. Am. A 34(8), 1448–1462 (2017). [CrossRef]
23. K.-F. Yang, S.-B. Gao, and Y.-J. Li, “Efficient illuminant estimation for color constancy using grey pixels,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 2254–2263.
24. X. Yang, X. Jin, and J. Zhang, “Improved single-illumination estimation accuracy via redefining the illuminant-invariant descriptor and the grey pixels,” Opt. Express 26(22), 29055–29067 (2018). [CrossRef]
25. P.-C. Hung, “Colorimetric calibration in electronic imaging devices using a look-up-table model and interpolations,” J. Electron. Imaging 2(1), 53–62 (1993). [CrossRef]
26. J. S. McElvain and W. Gish, “Camera color correction using two-dimensional transforms,” in Color and Imaging Conference (2013), pp. 250–256.
27. P.-C. Hung, “Color rendition using three-dimensional interpolation,” Imaging Applications in the Work World 0900, 111–115 (1988). [CrossRef]
28. H. R. Kang and P. G. Anderson, “Neural network applications to the color scanner and printer calibrations,” J. Electron. Imaging 1(2), 125–136 (1992). [CrossRef]
29. V. Cheung, S. Westland, D. Connah, and C. Ripamonti, “A comparative study of the characterisation of colour cameras by means of neural networks and polynomial transforms,” Color. Technol. 120(1), 19–25 (2004). [CrossRef]
30. G. Hong, M. R. Luo, and P. A. Rhodes, “A study of digital camera colorimetric characterization based on polynomial modeling,” Color Res. Appl. 26(1), 76–84 (2001). [CrossRef]
31. G. D. Finlayson, M. Mackiewicz, and A. Hurlbert, “Color correction using root-polynomial regression,” IEEE Trans. Image Process. 24(5), 1460–1470 (2015). [CrossRef]
32. S. Lim and A. Silverstein, “Spatially varying color correction (SVCC) matrices for reduced noise,” in Color and Imaging Conference (2004), pp. 76–81.
33. G. D. Finlayson and M. S. Drew, “White-point preserving color correction,” in Color and Imaging Conference (1997), pp. 258–261.
34. G. D. Finlayson and M. S. Drew, “Constrained least-squares regression in color spaces,” J. Electron. Imaging 6(4), 484–494 (1997). [CrossRef]
35. S. A. Shafer, “Using color to separate reflection components,” Color Res. Appl. 10(4), 210–218 (1985). [CrossRef]
36. G. J. Klinker, S. A. Shafer, and T. Kanade, “A physical approach to color image understanding,” Int. J. Comput. Vis. 4(1), 7–38 (1990). [CrossRef]
37. M. Oren and S. K. Nayar, “Generalization of Lambert's reflectance model,” in Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques (1994), pp. 239–246.
38. W. Shi, C. C. Loy, and X. Tang, “Deep specialized network for illuminant estimation,” in European Conference on Computer Vision (2016), pp. 371–387.
39. G. D. Finlayson, S. D. Hordley, and P. M. Hubel, “Color by correlation: A simple, unifying framework for color constancy,” IEEE Trans. Pattern Anal. Mach. Intell. 23(11), 1209–1221 (2001). [CrossRef]
40. A. Gijsenij, T. Gevers, and J. Van De Weijer, “Computational color constancy: Survey and experiments,” IEEE Trans. Image Process. 20(9), 2475–2489 (2011). [CrossRef]
41. G. D. Finlayson and R. Zakizadeh, “Reproduction angular error: An improved performance metric for illuminant estimation,” in Proceedings of the British Machine Vision Conference (2014), pp. 1–11.
42. H. Y. Chong, S. J. Gortler, and T. Zickler, “The von Kries hypothesis and a basis for color constancy,” in 2007 IEEE 11th International Conference on Computer Vision (2007), pp. 1–8.
43. G. D. Finlayson, M. S. Drew, and B. V. Funt, “Color constancy: generalized diagonal transforms suffice,” J. Opt. Soc. Am. A 11(11), 3011–3019 (1994). [CrossRef]
44. S. D. Hordley, “Scene illuminant estimation: past, present, and future,” Color Res. Appl. 31(4), 303–314 (2006). [CrossRef]
45. A. Gijsenij, T. Gevers, and J. Van De Weijer, “Improving color constancy by photometric edge weighting,” IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 918–929 (2012). [CrossRef]
46. S. Kawada, R. Kuroda, and S. Sugawa, “Color reproductivity improvement with additional virtual color filters for WRGB image sensor,” in Color Imaging XVIII (2013), pp. 1–7.
47. C. Park, K. Song, and M. Kang, “G-channel restoration for RWB CFA with double-exposed W channel,” Sensors 17(2), 293 (2017). [CrossRef]
48. P.-H. Su, P.-C. Chen, and H. H. Chen, “Compensation of spectral mismatch to enhance WRGB demosaicking,” in IEEE International Conference on Image Processing (2015), pp. 68–72.
49. L. J. Van Vliet, I. T. Young, and J. J. Gerbrands, Fundamentals of image processing (Delft University of Technology, 1998).
50. J. G. Skellam, “The frequency distribution of the difference between two Poisson variates belonging to different populations,” J. R. Stat. Soc. 109(3), 296 (1946). [CrossRef]
51. Y. Hwang, J.-S. Kim, and I.-S. Kweon, “Sensor noise modeling using the Skellam distribution: Application to the color edge detection,” in 2007 IEEE Conference on Computer Vision and Pattern Recognition (2007), pp. 1–8.
52. J. Vaillant, A. Clouet, and D. Alleysson, “Color correction matrix for sparse RGB-W image sensor without IR cutoff filter,” in Unconventional Optical Imaging (2018), p. 1067704.
53. P. V. Gehler, C. Rother, A. Blake, T. Minka, and T. Sharp, “Bayesian color constancy revisited,” in 2008 IEEE Conference on Computer Vision and Pattern Recognition (2008), pp. 1–8.
54. L. Shi and B. Funt, “Re-processed version of the Gehler color constancy dataset of 568 images,” Accessed from http://www.cs.sfu.ca/∼colour/data/ (2010).
55. K. Barnard, L. Martin, B. Funt, and A. Coath, “A data set for color research,” Color Res. Appl. 27(3), 147–151 (2002). [CrossRef]
56. F. Ciurea and B. Funt, “A large image database for color constancy research,” in Color and Imaging Conference (2003), pp. 160–164.
57. D. Cheng, D. K. Prasad, and M. S. Brown, “Illuminant estimation for color constancy: why spatial-domain methods work and the role of the color distribution,” J. Opt. Soc. Am. A 31(5), 1049–1058 (2014). [CrossRef]