Abstract

The estimation of the illuminant color is essential for many applications in quantitative color image analysis. However, it remains an unresolved problem unless additional heuristics or restrictive assumptions apply. Assuming uniformly colored and roundly shaped objects, Lee presented a theory and a method for computing the scene-illuminant chromaticity from specular highlights [H. C. Lee, J. Opt. Soc. Am. A 3, 1694 (1986)]. However, Lee’s method, called image path search (IPS), is sensitive to noise and limited in its handling of microtextured surfaces. We introduce a novel approach for estimating the color of a single illuminant in noisy and microtextured images, which frequently occur in real-world scenes. Using dichromatic regions of differently colored surfaces, our approach, named color line search (CLS), reverses Lee’s strategy of path search in the image domain. Reliable color lines are determined directly in the domain of the color diagrams in three steps. First, regions of interest are automatically detected around specular highlights, and local color diagrams are computed. Second, color lines are determined according to the dichromatic reflection model by a Hough transform of the color diagrams. Third, a consistency check is applied by a corresponding path search in the image domain. Our method is evaluated on 40 natural images of fruit and vegetables. Compared with Lee’s method, accuracy and stability are substantially improved. In addition, the color line search approach can easily be extended to scenes of objects with macrotextured surfaces.

© 2001 Optical Society of America
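The three-step procedure summarized in the abstract can be pictured with a short sketch. The following Python fragment is only an illustration of the general idea and not the authors' implementation: the highlight thresholds mirror the intensity and saturation rules listed in the Equations section, a plain least-squares line fit per region stands in for the Hough-transform proposals, the image-domain consistency check is omitted, and all function names and parameter values are hypothetical.

import numpy as np
from scipy import ndimage

def estimate_illuminant_chromaticity(rgb):
    """Illustrative sketch: rgb is a float array (H, W, 3) with linear values in [0, 1]."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = R + G + B + 1e-12

    # Step 1: regions of interest around specular highlights
    # (bright, weakly saturated pixels; cf. the I and S thresholds in the Equations section).
    intensity = total / 3.0
    saturation = 1.0 - rgb.min(axis=-1) / (intensity + 1e-12)
    highlight = (intensity > 0.5 * intensity.max()) & (saturation < 0.5 * saturation.max())
    rois, n_rois = ndimage.label(ndimage.binary_dilation(highlight, iterations=5))

    # Step 2: propose one color line g = a*r + b per ROI in the rg chromaticity diagram
    # (a least-squares fit here; the paper uses a Hough transform instead).
    r, g = R / total, G / total
    lines = []
    for k in range(1, n_rois + 1):
        mask = rois == k
        if mask.sum() < 50:                       # skip tiny regions
            continue
        a, b = np.polyfit(r[mask], g[mask], 1)
        lines.append((a, b))

    # Step 3 (intersection only; the consistency check is omitted in this sketch):
    # every valid line passes through (p_r, p_g), i.e. b = -p_r*a + p_g, so the common
    # intersection follows from a least-squares fit of intercepts against slopes.
    A = np.array([[-a, 1.0] for a, _ in lines])
    y = np.array([b for _, b in lines])
    (p_r, p_g), *_ = np.linalg.lstsq(A, y, rcond=None)
    return p_r, p_g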


References


  1. D. H. Brainard, “Color constancy in the nearly natural image. 2. Achromatic loci,” J. Opt. Soc. Am. A 15, 307–325 (1998).
  2. I. Kuriki, K. Uchikawa, “Limitations of surface-color and apparent-color constancy,” J. Opt. Soc. Am. A 13, 1622–1636 (1996).
  3. K.-H. Bäuml, “Color constancy: the role of image surface in illuminant adjustment,” J. Opt. Soc. Am. A 16, 1521–1530 (1999).
  4. E. W. Jin, S. K. Shevell, “Color memory and color constancy,” J. Opt. Soc. Am. A 13, 1981–1991 (1996).
  5. M. D’Zmura, P. Lennie, “Mechanisms of color constancy,” J. Opt. Soc. Am. A 3, 1662–1672 (1986).
  6. M. D’Zmura, G. Iverson, “Color constancy. III. General linear recovery of spectral descriptions for lights and surfaces,” J. Opt. Soc. Am. A 11, 2389–2400 (1994).
  7. D. H. Brainard, W. T. Freeman, “Bayesian color constancy,” J. Opt. Soc. Am. A 14, 1393–1411 (1997).
  8. S. Tominaga, “Multichannel vision system for estimating surface and illuminant functions,” J. Opt. Soc. Am. A 13, 2163–2173 (1996).
  9. J. Ho, B. V. Funt, M. S. Drew, “Separating a color signal into illumination and surface reflectance components: theory and applications,” IEEE Trans. Pattern Anal. Mach. Intell. 12, 966–977 (1990).
  10. S. Tominaga, B. A. Wandell, “Standard surface-reflection model and illuminant estimation,” J. Opt. Soc. Am. A 6, 576–584 (1989).
  11. S. Tominaga, B. A. Wandell, “Component estimation of surface spectral reflectance,” J. Opt. Soc. Am. A 7, 312–317 (1990).
  12. S. Tominaga, “Surface identification using the dichromatic reflection model,” IEEE Trans. Pattern Anal. Mach. Intell. 13, 658–670 (1991).
  13. G. Healey, D. Slater, “Global color constancy: recognition of objects by use of illumination-invariant properties of color distributions,” J. Opt. Soc. Am. A 11, 3003–3010 (1994).
  14. B. V. Funt, G. D. Finlayson, “Color constant color indexing,” IEEE Trans. Pattern Anal. Mach. Intell. 17, 522–533 (1995).
  15. L. T. Maloney, B. A. Wandell, “Color constancy: a method for recovering surface spectral reflectances,” J. Opt. Soc. Am. A 3, 29–33 (1986).
  16. C. L. Novak, S. A. Shafer, “Supervised color constancy using a color chart” (School of Computer Science, Carnegie Mellon University, Pittsburgh, Pa., 1990).
  17. C. Palm, I. Scholl, T. Lehmann, K. Spitzer, “Quantitative color measurement in laryngoscopic images,” in POSTER98 (Faculty of Electrical Engineering, Czech Technical University, Prague, 1998), Paper NS22.
  18. H. Hassan, J. Ilgner, C. Palm, T. Lehmann, K. Spitzer, M. Westhofen, “Objective judgement in laryngoscopic images,” in Advances in Quantitative Laryngoscopy, Voice and Speech Research, T. Lehmann, C. Palm, K. Spitzer, T. Tolxdorff, eds. (RWTH, Aachen, Germany, 1998), pp. 135–142.
  19. G. Buchsbaum, “A spatial processor model for object colour perception,” J. Franklin Inst. 300, 1–26 (1980).
  20. E. H. Land, “Recent advances in retinex theory,” Vision Res. 26, 7–21 (1986).
  21. H.-C. Lee, “Method for computing scene-illuminant chromaticity from specular highlights,” J. Opt. Soc. Am. A 3, 1694–1699 (1986).
  22. S. A. Shafer, “Using color to separate reflection components,” Color Res. Appl. 10, 210–218 (1985).
  23. M. H. Brill, “Image segmentation by object color: a unifying framework and connection to color constancy,” J. Opt. Soc. Am. A 7, 2041–2047 (1990).
  24. B. V. Funt, M. S. Drew, “Color space analysis of mutual illumination,” IEEE Trans. Pattern Anal. Mach. Intell. 15, 1319–1326 (1993).
  25. V. F. Leavers, Shape Detection in Computer Vision Using the Hough Transform (Springer-Verlag, Berlin, 1992).
  26. Y.-L. Tian, H. T. Tsui, “Shape from shading for non-Lambertian surfaces from one color image,” in Proceedings of the 13th International Conference on Pattern Recognition (IEEE Computer Society, Los Alamitos, Calif., 1996), Vol. 1, pp. 258–262.
  27. B. Funt, K. Barnard, L. Martin, “Is machine colour constancy good enough?” in Proceedings of the 5th European Conference on Computer Vision (Springer-Verlag, Berlin, 1998), Vol. 1, pp. 445–459.

Figures (6)

Fig. 1

(a) The color edges obtained by the Laplacian-of-Gaussian operator within each band (512×512 pixels) of the standard image “peppers” are marked in blue. They serve as starting points for the image path search (IPS) algorithm. (b) Only 11 paths remain when minimal lengths of 7 and 10 pixels are required for the ascending and descending parts of each path, respectively. (c) The intersection of the corresponding color lines within the rg diagram determines the scene-illuminant chromaticity.
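For panel (a), a minimal sketch of per-band Laplacian-of-Gaussian edge detection is given below. The zero-crossing test and the value sigma = 2.0 are assumptions of this sketch; the caption does not report the parameters actually used.

import numpy as np
from scipy import ndimage

def log_edges(band, sigma=2.0):
    # Zero-crossings of the Laplacian-of-Gaussian response of a single color band.
    response = ndimage.gaussian_laplace(band.astype(float), sigma=sigma)
    sign = response > 0
    flips = np.zeros_like(sign)
    flips[:, :-1] |= sign[:, :-1] != sign[:, 1:]   # sign change towards the right neighbor
    flips[:-1, :] |= sign[:-1, :] != sign[1:, :]   # sign change towards the lower neighbor
    return flips

def color_edges(rgb, sigma=2.0):
    # Union of the per-band edge maps, usable as starting points for a path search.
    return np.any([log_edges(rgb[..., c], sigma) for c in range(3)], axis=0)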

Fig. 2

(a) Two regions of interest (ROIs) are marked in the peppers standard image. The green pepper shows a microtextured surface, whereas that of the red one is rather homogeneous. (b) The high point density of the rg diagram computed from the entire image does not allow individual straight lines to be identified. (c) In contrast, such lines are clearly visible in the zoomed rg diagrams computed from the ROIs alone. The three best lines have been determined by a Hough transform.
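A generic straight-line Hough transform over the rg chromaticities of one ROI, as used for panel (c), could look as follows. The (theta, rho) parameterization, the bin counts, and returning the three strongest lines are assumptions of this sketch, not the paper's documented settings.

import numpy as np

def hough_color_lines(r, g, n_theta=180, n_rho=200, n_lines=3):
    # r, g: 1-D arrays of chromaticity coordinates of the ROI pixels.
    theta = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(1.0, 1.0)                  # chromaticities lie inside [0, 1]^2
    rho_edges = np.linspace(-rho_max, rho_max, n_rho + 1)

    # Each point votes for rho = r*cos(theta) + g*sin(theta) at every angle.
    acc = np.zeros((n_theta, n_rho), dtype=int)
    rho = np.outer(np.cos(theta), r) + np.outer(np.sin(theta), g)
    for t in range(n_theta):
        acc[t], _ = np.histogram(rho[t], bins=rho_edges)

    # Take the strongest cells and convert (theta, rho) to slope/intercept form g = a*r + b.
    lines = []
    for idx in np.argsort(acc, axis=None)[::-1][:n_lines]:
        t, k = np.unravel_index(idx, acc.shape)
        rho_c = 0.5 * (rho_edges[k] + rho_edges[k + 1])
        if abs(np.sin(theta[t])) < 1e-6:          # skip (near-)vertical lines
            continue
        a = -np.cos(theta[t]) / np.sin(theta[t])
        b = rho_c / np.sin(theta[t])
        lines.append((a, b))
    return lines

The normal form rho = r*cos(theta) + g*sin(theta) avoids the unbounded slope of near-vertical lines; the conversion to g = a*r + b is done only for reporting the proposals.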

Fig. 3

(a) Regions of specular highlights, determined by thresholding saturation and brightness, are marked in the peppers standard image. (c) With the color line search (CLS) applied, 13 valid color lines have been detected. Note that all lines meet in a sharply focused point of intersection. The corresponding image paths built for the consistency check are displayed in (b).
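The consistency check mentioned in the caption can be read, together with the alpha and beta inequalities in the Equations section, as a relative-tolerance comparison of two line estimates. The helper below is a hypothetical sketch of such a test; interpreting the subscripts "as" and "des" as the ascending and descending parts of an image path, the symmetric use of absolute values, and the default tolerances are assumptions.

def lines_consistent(a_as, b_as, a_des, b_des, alpha=0.2, beta=0.2):
    # Accept the pair if slope and intercept of the second estimate lie within a
    # relative band of +/- alpha/2 (resp. beta/2) around those of the first estimate.
    slope_ok = abs(a_des - a_as) < 0.5 * alpha * abs(a_as)
    intercept_ok = abs(b_des - b_as) < 0.5 * beta * abs(b_as)
    return slope_ok and intercept_ok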

Fig. 4

The CLS method of illuminant estimation in real-world scenes is visualized in pseudocode. The major parts of the algorithm—ROI detection, color line proposals, consistency check, and intersection estimation—are represented in lines 1–4, 5–9, 10–29, and 31–32, respectively.

Fig. 5

(a) Arrangements of fruit and vegetables have been analyzed by (b) the IPS and (c) the CLS.

Fig. 6

According to Table 1, the illuminant chromaticity of the white lamp is determined from the N ∈ {3, 7, 10} most reliable test images. Data obtained from unclipped and clipped capturing are denoted by crosses and diamonds, respectively. The upper and lower rows correspond to the CLS and IPS methods, respectively. Note that the CLS clusters are substantially smaller.

Tables (1)


Table 1. Mean Coordinates of the Color Line Intersection p̂_r and p̂_g and Their Standard Deviations σ_r and σ_g for Each Scene Illuminant and Each Algorithm, Based on the N ∈ {3, 7, 10} Images with Best Quality, Where the Robustness Is Indicated by the Mean Coefficient of Determination ρ̂² and Its Corresponding Variance σ_ρ

Equations (33)


\[
\begin{pmatrix} R \\ G \\ B \end{pmatrix}
= w_s \begin{pmatrix} R_s \\ G_s \\ B_s \end{pmatrix}
+ w_b \begin{pmatrix} R_b \\ G_b \\ B_b \end{pmatrix}.
\]

\[
\begin{pmatrix} R \\ G \\ B \end{pmatrix}
= w_s \beta_s
\begin{pmatrix}
\int_\lambda s(\lambda)\,\bar{r}(\lambda)\,\mathrm{d}\lambda \\
\int_\lambda s(\lambda)\,\bar{g}(\lambda)\,\mathrm{d}\lambda \\
\int_\lambda s(\lambda)\,\bar{b}(\lambda)\,\mathrm{d}\lambda
\end{pmatrix}
+ w_b
\begin{pmatrix}
\int_\lambda \beta_b(\lambda)\,s(\lambda)\,\bar{r}(\lambda)\,\mathrm{d}\lambda \\
\int_\lambda \beta_b(\lambda)\,s(\lambda)\,\bar{g}(\lambda)\,\mathrm{d}\lambda \\
\int_\lambda \beta_b(\lambda)\,s(\lambda)\,\bar{b}(\lambda)\,\mathrm{d}\lambda
\end{pmatrix},
\]

\[
\begin{pmatrix} R \\ G \\ B \end{pmatrix}
= w_s \beta_s \begin{pmatrix} s_R \\ s_G \\ s_B \end{pmatrix}
+ w_b \begin{pmatrix} \beta_{bR}\,s_R \\ \beta_{bG}\,s_G \\ \beta_{bB}\,s_B \end{pmatrix}.
\]

\[
\frac{B}{R} = a\,\frac{G}{R} + b,
\]

\[
a = \frac{s_B}{s_G}\,\frac{\beta_{bB}-\beta_{bR}}{\beta_{bG}-\beta_{bR}},
\qquad
b = \frac{s_B}{s_R}\,\frac{\beta_{bG}-\beta_{bB}}{\beta_{bG}-\beta_{bR}}.
\]

\[
r = \frac{R}{R+G+B},
\qquad
g = \frac{G}{R+G+B}.
\]

\[
g = a\,r + b,
\qquad
a = \frac{-\left[1 - \dfrac{s_B(\beta_{bB}-\beta_{bG})}{s_R(\beta_{bG}-\beta_{bR})}\right]}{1 + \dfrac{s_B(\beta_{bB}-\beta_{bR})}{s_G(\beta_{bG}-\beta_{bR})}},
\qquad
b = \frac{1}{1 + \dfrac{s_B(\beta_{bB}-\beta_{bR})}{s_G(\beta_{bG}-\beta_{bR})}}.
\]

\[
p_r = \frac{s_R}{s_R+s_G+s_B},
\qquad
p_g = \frac{s_G}{s_R+s_G+s_B}.
\]

\[
I = \frac{R+G+B}{3} > \frac{1}{2}\,I_{\max},
\]

\[
S = 1 - \frac{\min(R,\,G,\,B)}{I} < \frac{1}{2}\,S_{\max},
\]

\[
b = -p_r\,a + p_g.
\]

\[
a_{\mathrm{as}} - \frac{\alpha}{2}\,a_{\mathrm{as}} < a_{\mathrm{des}} < a_{\mathrm{as}} + \frac{\alpha}{2}\,a_{\mathrm{as}},
\]

\[
b_{\mathrm{as}} - \frac{\beta}{2}\,b_{\mathrm{as}} < b_{\mathrm{des}} < b_{\mathrm{as}} + \frac{\beta}{2}\,b_{\mathrm{as}}.
\]

\[
R = w_s\beta_s s_R + w_b\beta_{bR} s_R,
\]

\[
G = w_s\beta_s s_G + w_b\beta_{bG} s_G,
\]

\[
B = w_s\beta_s s_B + w_b\beta_{bB} s_B,
\]

\[
\frac{B}{R} = \frac{w_s\beta_s s_B + w_b\beta_{bB} s_B}{w_s\beta_s s_R + w_b\beta_{bR} s_R}.
\]

\[
\frac{B}{R}
= \frac{(w_s\beta_s s_B + w_b\beta_{bB} s_B)(\beta_{bG}-\beta_{bR})\,s_G}{(w_s\beta_s s_R + w_b\beta_{bR} s_R)(\beta_{bG}-\beta_{bR})\,s_G}
= \frac{w_s\beta_s\beta_{bG}\,s_G s_B + w_b\beta_{bG}\beta_{bB}\,s_G s_B - w_s\beta_s\beta_{bR}\,s_G s_B - w_b\beta_{bR}\beta_{bB}\,s_G s_B}{(w_s\beta_s s_R + w_b\beta_{bR} s_R)(\beta_{bG}-\beta_{bR})\,s_G}.
\]

\[
\frac{B}{R}
= \frac{(w_s\beta_s s_G + w_b\beta_{bG} s_G)(\beta_{bB}-\beta_{bR})\,s_B}{(w_s\beta_s s_R + w_b\beta_{bR} s_R)(\beta_{bG}-\beta_{bR})\,s_G}
+ \frac{(w_s\beta_s\beta_{bG} - w_s\beta_s\beta_{bB} + w_b\beta_{bR}\beta_{bG} - w_b\beta_{bR}\beta_{bB})\,s_G s_B}{(w_s\beta_s s_R + w_b\beta_{bR} s_R)(\beta_{bG}-\beta_{bR})\,s_G}.
\]

\[
\frac{B}{R}
= \frac{(w_s\beta_s s_G + w_b\beta_{bG} s_G)(\beta_{bB}-\beta_{bR})\,s_B}{(w_s\beta_s s_R + w_b\beta_{bR} s_R)(\beta_{bG}-\beta_{bR})\,s_G}
+ \frac{(w_s\beta_s s_R + w_b\beta_{bR} s_R)(\beta_{bG}-\beta_{bB})\,s_B}{(w_s\beta_s s_R + w_b\beta_{bR} s_R)(\beta_{bG}-\beta_{bR})\,s_R}.
\]

\[
\frac{B}{R}
= \frac{(\beta_{bB}-\beta_{bR})\,s_B}{(\beta_{bG}-\beta_{bR})\,s_G}\,
  \frac{w_s\beta_s s_G + w_b\beta_{bG} s_G}{w_s\beta_s s_R + w_b\beta_{bR} s_R}
+ \frac{(\beta_{bG}-\beta_{bB})\,s_B}{(\beta_{bG}-\beta_{bR})\,s_R}
= a\,\frac{G}{R} + b,
\]

\[
a = \frac{s_B}{s_G}\,\frac{\beta_{bB}-\beta_{bR}}{\beta_{bG}-\beta_{bR}},
\qquad
b = \frac{s_B}{s_R}\,\frac{\beta_{bG}-\beta_{bB}}{\beta_{bG}-\beta_{bR}}.
\]

\[
\frac{G}{R+G+B} = \frac{w_s\beta_s s_G + w_b\beta_{bG} s_G}{R+G+B}.
\]

\[
\frac{G}{R+G+B}
= \frac{(w_s\beta_s s_G + w_b\beta_{bG} s_G)(\beta_{bB}s_B - \beta_{bR}s_B + \beta_{bG}s_G - \beta_{bR}s_G)\,s_R}{(R+G+B)(\beta_{bB}s_B - \beta_{bR}s_B + \beta_{bG}s_G - \beta_{bR}s_G)\,s_R}
= \frac{(\beta_{bB}-\beta_{bG})\,s_G s_B\,(w_s\beta_s s_R + w_b\beta_{bR}s_R)}{(R+G+B)(\beta_{bB}s_B - \beta_{bR}s_B + \beta_{bG}s_G - \beta_{bR}s_G)\,s_R}
+ \frac{(\beta_{bG}-\beta_{bR})\,s_R s_G\left[(w_s\beta_s s_G + w_b\beta_{bG}s_G) + (w_s\beta_s s_B + w_b\beta_{bB}s_B)\right]}{(R+G+B)(\beta_{bB}s_B - \beta_{bR}s_B + \beta_{bG}s_G - \beta_{bR}s_G)\,s_R}.
\]

\[
\frac{G}{R+G+B}
= \frac{s_R s_G\left[(\beta_{bG}-\beta_{bR})(G+B) + \dfrac{s_B}{s_R}(\beta_{bB}-\beta_{bG})\,R\right]}{(R+G+B)(\beta_{bB}s_B - \beta_{bR}s_B + \beta_{bG}s_G - \beta_{bR}s_G)\,s_R}.
\]

\[
\frac{G}{R+G+B}
= \frac{(R+G+B) + \left[-1 + \dfrac{s_B(\beta_{bB}-\beta_{bG})}{s_R(\beta_{bG}-\beta_{bR})}\right]R}
       {(R+G+B)\,\dfrac{\beta_{bB}s_B - \beta_{bR}s_B + \beta_{bG}s_G - \beta_{bR}s_G}{s_G(\beta_{bG}-\beta_{bR})}}.
\]

\[
\frac{G}{R+G+B}
= \frac{\left[-1 + \dfrac{s_B(\beta_{bB}-\beta_{bG})}{s_R(\beta_{bG}-\beta_{bR})}\right]\dfrac{R}{R+G+B}}
       {1 + \dfrac{s_B(\beta_{bB}-\beta_{bR})}{s_G(\beta_{bG}-\beta_{bR})}}
+ \frac{1}{1 + \dfrac{s_B(\beta_{bB}-\beta_{bR})}{s_G(\beta_{bG}-\beta_{bR})}}
= a\,\frac{R}{R+G+B} + b,
\]

\[
a = \frac{-\left[1 - \dfrac{s_B(\beta_{bB}-\beta_{bG})}{s_R(\beta_{bG}-\beta_{bR})}\right]}{1 + \dfrac{s_B(\beta_{bB}-\beta_{bR})}{s_G(\beta_{bG}-\beta_{bR})}},
\qquad
b = \frac{1}{1 + \dfrac{s_B(\beta_{bB}-\beta_{bR})}{s_G(\beta_{bG}-\beta_{bR})}}.
\]

\[
p_g = a\,p_r + b.
\]

\[
p_g = \frac{s_R\left[-1 + \dfrac{s_B(\beta_{bB}-\beta_{bG})}{s_R(\beta_{bG}-\beta_{bR})}\right]}
           {(s_R+s_G+s_B)\left[1 + \dfrac{s_B(\beta_{bB}-\beta_{bR})}{s_G(\beta_{bG}-\beta_{bR})}\right]}
+ \frac{1}{1 + \dfrac{s_B(\beta_{bB}-\beta_{bR})}{s_G(\beta_{bG}-\beta_{bR})}}.
\]

\[
p_g = \frac{\left[-1 + \dfrac{s_B(\beta_{bB}-\beta_{bG})}{s_R(\beta_{bG}-\beta_{bR})}\right]s_R + (s_R+s_G+s_B)}
           {(s_R+s_G+s_B)\,\dfrac{\beta_{bB}s_B - \beta_{bR}s_B + \beta_{bG}s_G - \beta_{bR}s_G}{s_G(\beta_{bG}-\beta_{bR})}}.
\]

\[
p_g = \frac{s_G\left\{\left[-s_R(\beta_{bG}-\beta_{bR}) + s_B(\beta_{bB}-\beta_{bG})\right] + (\beta_{bG}-\beta_{bR})(s_R+s_G+s_B)\right\}}
           {(s_R+s_G+s_B)(\beta_{bB}s_B - \beta_{bR}s_B + \beta_{bG}s_G - \beta_{bR}s_G)}.
\]

\[
p_g = \frac{s_G}{s_R+s_G+s_B}\,
      \frac{-\beta_{bG}s_R + \beta_{bR}s_R + \beta_{bB}s_B - \beta_{bG}s_B + \beta_{bG}s_R + \beta_{bG}s_G + \beta_{bG}s_B - \beta_{bR}s_R - \beta_{bR}s_G - \beta_{bR}s_B}
           {\beta_{bB}s_B - \beta_{bR}s_B + \beta_{bG}s_G - \beta_{bR}s_G}
= \frac{s_G}{s_R+s_G+s_B}\,
      \frac{\beta_{bB}s_B + \beta_{bG}s_G - \beta_{bR}s_G - \beta_{bR}s_B}
           {\beta_{bB}s_B - \beta_{bR}s_B + \beta_{bG}s_G - \beta_{bR}s_G}
= \frac{s_G}{s_R+s_G+s_B}.
\]
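The final identity p_g = s_G/(s_R + s_G + s_B) confirms that every valid color line passes through the illuminant chromaticity (p_r, p_g), so the intercepts obey b = -p_r a + p_g. One straightforward way to exploit this relation, sketched below with a hypothetical helper, is an ordinary least-squares fit of the intercepts against the slopes; whether the paper uses exactly this estimator is not stated here.

import numpy as np

def intersect_color_lines(slopes, intercepts):
    # Solve b_i ~= -p_r * a_i + p_g for (p_r, p_g) in the least-squares sense.
    a = np.asarray(slopes, dtype=float)
    b = np.asarray(intercepts, dtype=float)
    design = np.column_stack([-a, np.ones_like(a)])
    (p_r, p_g), *_ = np.linalg.lstsq(design, b, rcond=None)
    return p_r, p_g

# Example: three lines through (0.4, 0.35) recover that point exactly.
print(intersect_color_lines([-1.0, 0.5, 2.0], [0.75, 0.15, -0.45]))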
