Abstract

Illuminant direction estimation is an important research issue in image processing. Because texture information can be extracted from a single image at low cost, it is worthwhile to estimate the illuminant direction from scene texture. This paper proposes a novel computational method that estimates the illuminant direction on both color outdoor images and the extended Yale face database B. We separate the luminance component from the resized YCbCr image and detect its edges with the Canny edge detector. We then divide the binary edge image into 16 local regions and calculate the edge level percentage in each of them; this percentage measures the complexity of each local region of the luminance component. Finally, based on an error function between the measured and calculated intensities, together with a constraint function for an infinite-light-source model, we compute the illuminant directions of the three local regions of the luminance component that combine low complexity with a large average gray value, and synthesize them into the final illuminant direction. Unlike previous works, the proposed method requires neither the entire image nor textures included in a training set. Experimental results show that the proposed method outperforms existing ones in both correct rate and execution time.
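The region-selection stage described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a 4 × 4 grid (16 regions), uses a simple Sobel-magnitude threshold as a stand-in for the Canny detector, and adds a Pentland-style azimuth estimate (the mean intensity gradient of a Lambertian surface points toward the light source); all function names and thresholds are illustrative.

```python
import numpy as np

def luminance(rgb):
    # BT.601 luma, i.e., the Y channel of YCbCr
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def edge_map(y, thresh=30.0):
    # Central-difference gradient magnitude thresholded into a binary edge
    # image; a stand-in for the Canny detector used in the paper.
    gx = np.zeros_like(y)
    gy = np.zeros_like(y)
    gx[:, 1:-1] = y[:, 2:] - y[:, :-2]
    gy[1:-1, :] = y[2:, :] - y[:-2, :]
    return np.hypot(gx, gy) > thresh

def region_stats(y, edges, grid=4):
    # Split the image into grid x grid local regions; the edge-pixel
    # fraction of a region serves as its "edge level percentage" (complexity).
    h, w = y.shape
    rh, rw = h // grid, w // grid
    stats = []
    for i in range(grid):
        for j in range(grid):
            sl = (slice(i * rh, (i + 1) * rh), slice(j * rw, (j + 1) * rw))
            stats.append((edges[sl].mean(), y[sl].mean(), (i, j)))
    return stats

def pick_regions(stats, k=3):
    # Prefer low complexity first, then large average gray value.
    return sorted(stats, key=lambda s: (s[0], -s[1]))[:k]

def illuminant_azimuth(y):
    # Pentland-style tilt estimate from the average intensity gradient;
    # the paper instead fits an error function per region and fuses
    # the three per-region directions.
    gx = y[:, 2:] - y[:, :-2]
    gy = y[2:, :] - y[:-2, :]
    return np.arctan2(gy.mean(), gx.mean())
```

In this sketch, the per-region estimates of the three selected regions would then be combined (e.g., averaged) into the final illuminant direction.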

© 2014 Optical Society of America


References


  1. S. Bianco, A. Bruna, F. Naccari, and R. Schettini, “Color space transformations for digital photography exploiting information about the illuminant estimation process,” J. Opt. Soc. Am. A 29, 374–384 (2012).
  2. S. Klammt, A. Neyer, and H. Müller, “Microoptics for efficient redirection of sunlight,” Appl. Opt. 51, 2051–2056 (2012).
  3. S. Tominaga and T. Horiuchi, “Spectral imaging by synchronizing capture and illumination,” J. Opt. Soc. Am. A 29, 1764–1775 (2012).
  4. B. Bringier, A. Bony, and M. Khoudeir, “Specularity and shadow detection for the multisource photometric reconstruction of a textured surface,” J. Opt. Soc. Am. A 29, 11–21 (2012).
  5. H. L. Shen and Q. Y. Cai, “Simple and efficient method for specularity removal in an image,” Appl. Opt. 48, 2711–2719 (2009).
  6. V. Diaz-Ramirez and V. Kober, “Target recognition under nonuniform illumination conditions,” Appl. Opt. 48, 1408–1418 (2009).
  7. S. Karlsson, S. Pont, and J. Koenderink, “Illuminance flow over anisotropic surfaces,” J. Opt. Soc. Am. A 25, 282–291 (2008).
  8. Y. F. Zhang and Y. H. Yang, “Multiple illuminant direction detection with application to image synthesis,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 915–920 (2001).
  9. C. S. Bouganis and M. Brookes, “Multiple light source detection,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 509–514 (2004).
  10. W. Zhou and C. Kambhamettu, “A unified framework for scene illuminant estimation,” Image Vis. Comput. 26, 415–429 (2008).
  11. I. Sato, Y. Sato, and K. Ikeuchi, “Illumination distribution from shadows,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 1999, pp. 306–312.
  12. T. Kim and K. S. Hong, “A practical single image based approach for estimating illumination distribution from shadows,” in Proceedings of the IEEE International Conference on Computer Vision, October 2005, pp. 266–271.
  13. S. Y. Cho and T. W. S. Chow, “Neural computation approach for developing a 3-D shape reconstruction model,” IEEE Trans. Neural Netw. 12, 1204–1214 (2001).
  14. C. K. Chow and S. Y. Yuen, “Illumination direction estimation for augmented reality using a surface input real valued output regression network,” Pattern Recogn. 43, 1700–1716 (2010).
  15. M. Chantler, M. Petrou, A. Penirschke, M. Schmidt, and G. McGunnigle, “Classifying surface texture while simultaneously estimating illumination direction,” Int. J. Comput. Vis. 62, 83–96 (2005).
  16. Q. F. Zheng and R. Chellappa, “Estimation of illuminant direction, albedo, and shape from shading,” IEEE Trans. Pattern Anal. Mach. Intell. 13, 680–702 (1991).
  17. K. Hara, K. Nishino, and K. Ikeuchi, “Light source position and reflectance estimation from a single view without the distant illumination assumption,” IEEE Trans. Pattern Anal. Mach. Intell. 27, 493–505 (2005).
  18. A. P. Pentland, “Local shading analysis,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-6, 170–187 (1984).
  19. J. Yang, Z. P. Deng, Y. K. Guo, and J. G. Li, “Two new approaches for illuminant direction estimation,” J. Shanghai Jiaotong Univ. 36, 894–896 (2002).
  20. H. Lee, H. Choi, B. Lee, S. Park, and B. Kang, “One dimensional conversion of color temperature in perceived illumination,” IEEE Trans. Consum. Electron. 47, 340–346 (2001).
  21. G. D. Finlayson, M. S. Drew, and B. V. Funt, “Color constancy: generalized diagonal transforms suffice,” J. Opt. Soc. Am. A 11, 3011–3019 (1994).
  22. T. Zickler, S. P. Mallick, D. J. Kriegman, and P. N. Belhumeur, “Color subspaces as photometric invariants,” Int. J. Comput. Vis. 79, 13–30 (2008).
  23. J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 679–698 (1986).
  24. R. A. Peters and R. N. Strickland, “Image complexity metrics for automatic target recognizers,” in Proceedings of the Automatic Target Recognition System and Technology Conference, October 1990, pp. 1–17.
  25. Z. Y. Gao, X. M. Yang, J. M. Gong, and H. Jin, “Research on image complexity description methods,” J. Image Graphics 15, 129–135 (2010).
  26. M. Chacon, L. E. Aguilar, and A. Delgado, “Fuzzy adaptive edge definition based on the complexity of the image,” in Proceedings of the 10th IEEE International Conference on Fuzzy Systems, December 2001, pp. 675–678.
  27. M. Chacon, D. Alma, and S. Corral, “Image complexity measure: a human criterion free approach,” in Proceedings of the IEEE Annual Meeting of the North American Fuzzy Information Processing Society, June 2005, pp. 241–246.
  28. J. H. Liu, J. F. Yang, and T. Fang, “Color property analysis of remote sensing imagery,” Acta Photon. Sin. 38, 441–447 (2009).
  29. Y. J. Yang, R. C. Zhao, and W. B. Wang, “The detection of shadow region in aerial image,” Signal Process. 18, 228–232 (2002).
  30. Y. Z. Li, J. Hu, S. Z. Niu, X. Z. Meng, and Y. L. Zhu, “Exposing digital image forgeries by detecting inconsistence in light source direction,” J. Beijing Univ. Posts Telecommun. 34, 26–30 (2011).
  31. Y. D. Lv, X. J. Shen, H. P. Chen, and Y. W. Wang, “Blind identification for digital images based on inconsistency of illuminant direction,” J. Jilin Univ. 34, 293–298 (2009).
  32. X. B. Sun, J. Yin, D. H. Li, and B. L. Xiao, “Point in polygon testing based on normal direction,” Opt. Precis. Eng. 16, 1122–1126 (2008).
  33. A. Georghiades, P. Belhumeur, and D. Kriegman, “From few to many: illumination cone models for face recognition under variable lighting and pose,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 643–660 (2001).
  34. K. C. Lee, J. Ho, and D. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Trans. Pattern Anal. Mach. Intell. 27, 684–698 (2005).
  35. X. K. Wang, X. Mao, and I. Mitsuru, “Human face analysis with nonlinear manifold learning,” J. Electron. Inf. Technol. 33, 2531–2535 (2011).
  36. X. K. Wang, X. Mao, and C. D. Caleanu, “Nonlinear shape-texture manifold learning,” IEICE Trans. Inf. Syst. E93.D, 2016–2019 (2010).
  37. Y. L. Xue, X. Mao, C. D. Caleanu, and S. W. Lv, “Layered fuzzy facial expression generation of virtual agent,” Chin. J. Electron. 19, 69–74 (2010).

2012 (4)

2011 (2)

X. K. Wang, X. Mao, and I. Mitsuru, “Human face analysis with nonlinear manifold learning,” J. Electron. Inf. Technol. 33, 2531–2535 (2011).

Y. Z. Li, J. Hu, S. Z. Niu, X. Z. Meng, and Y. L. Zhu, “Exposing digital image forgeries by detecting inconsistence in light source direction,” J. Beijing Univ. Posts Telecommun. 34, 26–30 (2011).

2010 (4)

C. K. Chow and S. Y. Yuen, “Illumination direction estimation for augmented reality using a surface input real valued output regression network,” Pattern Recogn. 43, 1700–1716 (2010).
[CrossRef]

X. K. Wang, X. Mao, and C. D. Caleanu, “Nonlinear shape-texture manifold learning,” IEICE Trans. Inf. Syst. E93.D, 2016–2019 (2010).

Y. L. Xue, X. Mao, C. D. Caleanu, and S. W. Lv, “Layered fuzzy facial expression generation of virtual agent,” Chin. J. Electron. 19, 69–74 (2010).

Z. Y. Gao, X. M. Yang, J. M. Gong, and H. Jin, “Research on image complexity description methods,” J. Image Graphics 15, 129–135 (2010).

2009 (4)

J. H. Liu, J. F. Yang, and T. Fang, “Color property analysis of remote sensing imagery,” Acta Photon. Sin. 38, 441–447 (2009).

V. Diaz-Ramirez and V. Kober, “Target recognition under nonuniform illumination conditions,” Appl. Opt. 48, 1408–1418 (2009).
[CrossRef]

H. L. Shen and Q. Y. Cai, “Simple and efficient method for specularity removal in an image,” Appl. Opt. 48, 2711–2719 (2009).
[CrossRef]

Y. D. Lv, X. J. Shen, H. P. Chen, and Y. W. Wang, “Blind identification for digital images based on inconsistency of illuminant direction,” J. Jilin Univ. 34, 293–298 (2009).

2008 (4)

X. B. Sun, J. Yin, D. H. Li, and B. L. Xiao, “Point in polygon testing based on normal direction,” Opt. Precis. Eng. 16, 1122–1126 (2008).

T. Zickler, S. P. Mallick, D. J. Kriegman, and P. N. Belhumeur, “Color subspaces as photometric invariants,” Int. J. Comput. Vis. 79, 13–30 (2008).
[CrossRef]

W. Zhou and C. Kambhamettu, “A unified framework for scene illuminant estimation,” Image Vis. Comput. 26, 415–429 (2008).
[CrossRef]

S. Karlsson, S. Pont, and J. Koenderink, “Illuminance flow over anisotropic surfaces,” J. Opt. Soc. Am. A 25, 282–291 (2008).
[CrossRef]

2005 (3)

K. C. Lee, J. Ho, and D. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Trans. Pattern Anal. Mach. Intell. 27, 684–698 (2005).
[CrossRef]

M. Chantler, M. Petrou, A. Penirsche, M. Schmidt, and G. MGunnigle, “Classifying surface texture while simultaneously estimating illumination direction,” Int. J. Comput. Vis. 62, 83–96 (2005).

K. Hara, K. Nishino, and K. Ikeuchi, “Light source position and reflectance estimation from a single view without the distant illumination assumption,” IEEE Trans. Pattern Anal. Mach. Intell. 27, 493–505 (2005).
[CrossRef]

2004 (1)

C. S. Bouganis and M. Brookes, “Multiple light source detection,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 509–514 (2004).
[CrossRef]

2002 (2)

J. Yang, Z. P. Deng, Y. K. Guo, and J. G. Li, “Two new approaches for illuminant direction estimation,” J. Shanghai Jiaotong Univ. 36, 894–896 (2002).

Y. J. Yang, R. C. Zhao, and W. B. Wang, “The detection of shadow region in aerial image,” Signal Process. 18, 228–232 (2002).

2001 (4)

H. Lee, H. Choi, B. Lee, S. Park, and B. Kang, “One dimensional conversion of color temperature in perceived illumination,” IEEE Trans. Consum. Electron. 47, 340–346 (2001).

A. Georghiades, P. Belhumeur, and D. Kriegman, “From few to many: illumination cone models for face recognition under variable lighting and pose,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 643–660 (2001).
[CrossRef]

Y. F. Zhang and Y. H. Yang, “Multiple illuminant direction detection with application to image synthesis,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 915–920 (2001).
[CrossRef]

S. Y. Cho and T. W. S. Chow, “Neural computation approach for developing a 3-D shape reconstruction model,” IEEE Trans. Neural Netw. 12, 1204–1214 (2001).
[CrossRef]

1994 (1)

1991 (1)

Q. F. Zheng and R. Chellappa, “Estimation of illuminant direction, albedo, and shape from shading,” IEEE Trans. Pattern Anal. Mach. Intell. 13, 680–702 (1991).
[CrossRef]

1986 (1)

J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 679–698 (1986).
[CrossRef]

1984 (1)

A. P. Pentland, “Local shading analysis,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-6, 170–187 (1984).
[CrossRef]

Aguilar, L. E.

M. Chacon, L. E. Aguilar, and A. Delgado, “Fuzzy adaptive edge definition based on the complexity of the image,” in Proceedings of the 10th IEEE International Conference on Fuzzy Systems, December2001, pp. 675–678.

Alma, D.

M. Chacon, D. Alma, and S. Corral, “Image complexity measure: a human criterion free approach,” in Proceedings of the IEEE Annual Meeting of the North American Fuzzy Information Processing Society, June2005, pp. 241–246.

Belhumeur, P.

A. Georghiades, P. Belhumeur, and D. Kriegman, “From few to many: illumination cone models for face recognition under variable lighting and pose,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 643–660 (2001).
[CrossRef]

Belhumeur, P. N.

T. Zickler, S. P. Mallick, D. J. Kriegman, and P. N. Belhumeur, “Color subspaces as photometric invariants,” Int. J. Comput. Vis. 79, 13–30 (2008).
[CrossRef]

Bianco, S.

Bony, A.

Bouganis, C. S.

C. S. Bouganis and M. Brookes, “Multiple light source detection,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 509–514 (2004).
[CrossRef]

Bringier, B.

Brookes, M.

C. S. Bouganis and M. Brookes, “Multiple light source detection,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 509–514 (2004).
[CrossRef]

Bruna, A.

Cai, Q. Y.

Caleanu, C. D.

Y. L. Xue, X. Mao, C. D. Caleanu, and S. W. Lv, “Layered fuzzy facial expression generation of virtual agent,” Chin. J. Electron. 19, 69–74 (2010).

X. K. Wang, X. Mao, and C. D. Caleanu, “Nonlinear shape-texture manifold learning,” IEICE Trans. Inf. Syst. E93.D, 2016–2019 (2010).

Canny, J.

J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 679–698 (1986).
[CrossRef]

Chacon, M.

M. Chacon, L. E. Aguilar, and A. Delgado, “Fuzzy adaptive edge definition based on the complexity of the image,” in Proceedings of the 10th IEEE International Conference on Fuzzy Systems, December2001, pp. 675–678.

M. Chacon, D. Alma, and S. Corral, “Image complexity measure: a human criterion free approach,” in Proceedings of the IEEE Annual Meeting of the North American Fuzzy Information Processing Society, June2005, pp. 241–246.

Chantler, M.

M. Chantler, M. Petrou, A. Penirsche, M. Schmidt, and G. MGunnigle, “Classifying surface texture while simultaneously estimating illumination direction,” Int. J. Comput. Vis. 62, 83–96 (2005).

Chellappa, R.

Q. F. Zheng and R. Chellappa, “Estimation of illuminant direction, albedo, and shape from shading,” IEEE Trans. Pattern Anal. Mach. Intell. 13, 680–702 (1991).
[CrossRef]

Chen, H. P.

Y. D. Lv, X. J. Shen, H. P. Chen, and Y. W. Wang, “Blind identification for digital images based on inconsistency of illuminant direction,” J. Jilin Univ. 34, 293–298 (2009).

Cho, S. Y.

S. Y. Cho and T. W. S. Chow, “Neural computation approach for developing a 3-D shape reconstruction model,” IEEE Trans. Neural Netw. 12, 1204–1214 (2001).
[CrossRef]

Choi, H.

H. Lee, H. Choi, B. Lee, S. Park, and B. Kang, “One dimensional conversion of color temperature in perceived illumination,” IEEE Trans. Consum. Electron. 47, 340–346 (2001).

Chow, C. K.

C. K. Chow and S. Y. Yuen, “Illumination direction estimation for augmented reality using a surface input real valued output regression network,” Pattern Recogn. 43, 1700–1716 (2010).
[CrossRef]

Chow, T. W. S.

S. Y. Cho and T. W. S. Chow, “Neural computation approach for developing a 3-D shape reconstruction model,” IEEE Trans. Neural Netw. 12, 1204–1214 (2001).
[CrossRef]

Corral, S.

M. Chacon, D. Alma, and S. Corral, “Image complexity measure: a human criterion free approach,” in Proceedings of the IEEE Annual Meeting of the North American Fuzzy Information Processing Society, June2005, pp. 241–246.

Delgado, A.

M. Chacon, L. E. Aguilar, and A. Delgado, “Fuzzy adaptive edge definition based on the complexity of the image,” in Proceedings of the 10th IEEE International Conference on Fuzzy Systems, December2001, pp. 675–678.

Deng, Z. P.

J. Yang, Z. P. Deng, Y. K. Guo, and J. G. Li, “Two new approaches for illuminant direction estimation,” J. Shanghai Jiaotong Univ. 36, 894–896 (2002).

Diaz-Ramirez, V.

Drew, M. S.

Fang, T.

J. H. Liu, J. F. Yang, and T. Fang, “Color property analysis of remote sensing imagery,” Acta Photon. Sin. 38, 441–447 (2009).

Finlayson, G. D.

Funt, B. V.

Gao, Z. Y.

Z. Y. Gao, X. M. Yang, J. M. Gong, and H. Jin, “Research on image complexity description methods,” J. Image Graphics 15, 129–135 (2010).

Georghiades, A.

A. Georghiades, P. Belhumeur, and D. Kriegman, “From few to many: illumination cone models for face recognition under variable lighting and pose,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 643–660 (2001).
[CrossRef]

Gong, J. M.

Z. Y. Gao, X. M. Yang, J. M. Gong, and H. Jin, “Research on image complexity description methods,” J. Image Graphics 15, 129–135 (2010).

Guo, Y. K.

J. Yang, Z. P. Deng, Y. K. Guo, and J. G. Li, “Two new approaches for illuminant direction estimation,” J. Shanghai Jiaotong Univ. 36, 894–896 (2002).

Hara, K.

K. Hara, K. Nishino, and K. Ikeuchi, “Light source position and reflectance estimation from a single view without the distant illumination assumption,” IEEE Trans. Pattern Anal. Mach. Intell. 27, 493–505 (2005).
[CrossRef]

Ho, J.

K. C. Lee, J. Ho, and D. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Trans. Pattern Anal. Mach. Intell. 27, 684–698 (2005).
[CrossRef]

Hong, K. S.

T. Kim and K. S. Hong, “A practical single image based approach for estimating illumination distribution from shadows,” in Proceedings of the IEEE International Conference on Computer Vision, October2005, pp. 266–271.

Horiuchi, T.

Hu, J.

Y. Z. Li, J. Hu, S. Z. Niu, X. Z. Meng, and Y. L. Zhu, “Exposing digital image forgeries by detecting inconsistence in light source direction,” J. Beijing Univ. Posts Telecommun. 34, 26–30 (2011).

Ikeuchi, K.

K. Hara, K. Nishino, and K. Ikeuchi, “Light source position and reflectance estimation from a single view without the distant illumination assumption,” IEEE Trans. Pattern Anal. Mach. Intell. 27, 493–505 (2005).
[CrossRef]

I. Sato, Y. Sato, and K. Ikeuchi, “Illumination distribution from shadows,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June1999, pp. 306–312.

Jin, H.

Z. Y. Gao, X. M. Yang, J. M. Gong, and H. Jin, “Research on image complexity description methods,” J. Image Graphics 15, 129–135 (2010).

Kambhamettu, C.

W. Zhou and C. Kambhamettu, “A unified framework for scene illuminant estimation,” Image Vis. Comput. 26, 415–429 (2008).
[CrossRef]

Kang, B.

H. Lee, H. Choi, B. Lee, S. Park, and B. Kang, “One dimensional conversion of color temperature in perceived illumination,” IEEE Trans. Consum. Electron. 47, 340–346 (2001).

Karlsson, S.

Khoudeir, M.

Kim, T.

T. Kim and K. S. Hong, “A practical single image based approach for estimating illumination distribution from shadows,” in Proceedings of the IEEE International Conference on Computer Vision, October2005, pp. 266–271.

Klammt, S.

Kober, V.

Koenderink, J.

Kriegman, D.

K. C. Lee, J. Ho, and D. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Trans. Pattern Anal. Mach. Intell. 27, 684–698 (2005).
[CrossRef]

A. Georghiades, P. Belhumeur, and D. Kriegman, “From few to many: illumination cone models for face recognition under variable lighting and pose,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 643–660 (2001).
[CrossRef]

Kriegman, D. J.

T. Zickler, S. P. Mallick, D. J. Kriegman, and P. N. Belhumeur, “Color subspaces as photometric invariants,” Int. J. Comput. Vis. 79, 13–30 (2008).
[CrossRef]

Lee, B.

H. Lee, H. Choi, B. Lee, S. Park, and B. Kang, “One dimensional conversion of color temperature in perceived illumination,” IEEE Trans. Consum. Electron. 47, 340–346 (2001).

Lee, H.

H. Lee, H. Choi, B. Lee, S. Park, and B. Kang, “One dimensional conversion of color temperature in perceived illumination,” IEEE Trans. Consum. Electron. 47, 340–346 (2001).

Lee, K. C.

K. C. Lee, J. Ho, and D. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Trans. Pattern Anal. Mach. Intell. 27, 684–698 (2005).
[CrossRef]

Li, D. H.

X. B. Sun, J. Yin, D. H. Li, and B. L. Xiao, “Point in polygon testing based on normal direction,” Opt. Precis. Eng. 16, 1122–1126 (2008).

Li, J. G.

J. Yang, Z. P. Deng, Y. K. Guo, and J. G. Li, “Two new approaches for illuminant direction estimation,” J. Shanghai Jiaotong Univ. 36, 894–896 (2002).

Li, Y. Z.

Y. Z. Li, J. Hu, S. Z. Niu, X. Z. Meng, and Y. L. Zhu, “Exposing digital image forgeries by detecting inconsistence in light source direction,” J. Beijing Univ. Posts Telecommun. 34, 26–30 (2011).

Liu, J. H.

J. H. Liu, J. F. Yang, and T. Fang, “Color property analysis of remote sensing imagery,” Acta Photon. Sin. 38, 441–447 (2009).

Lv, S. W.

Y. L. Xue, X. Mao, C. D. Caleanu, and S. W. Lv, “Layered fuzzy facial expression generation of virtual agent,” Chin. J. Electron. 19, 69–74 (2010).

Lv, Y. D.

Y. D. Lv, X. J. Shen, H. P. Chen, and Y. W. Wang, “Blind identification for digital images based on inconsistency of illuminant direction,” J. Jilin Univ. 34, 293–298 (2009).

Mallick, S. P.

T. Zickler, S. P. Mallick, D. J. Kriegman, and P. N. Belhumeur, “Color subspaces as photometric invariants,” Int. J. Comput. Vis. 79, 13–30 (2008).
[CrossRef]

Mao, X.

X. K. Wang, X. Mao, and I. Mitsuru, “Human face analysis with nonlinear manifold learning,” J. Electron. Inf. Technol. 33, 2531–2535 (2011).

Y. L. Xue, X. Mao, C. D. Caleanu, and S. W. Lv, “Layered fuzzy facial expression generation of virtual agent,” Chin. J. Electron. 19, 69–74 (2010).

X. K. Wang, X. Mao, and C. D. Caleanu, “Nonlinear shape-texture manifold learning,” IEICE Trans. Inf. Syst. E93.D, 2016–2019 (2010).

Meng, X. Z.

Y. Z. Li, J. Hu, S. Z. Niu, X. Z. Meng, and Y. L. Zhu, “Exposing digital image forgeries by detecting inconsistence in light source direction,” J. Beijing Univ. Posts Telecommun. 34, 26–30 (2011).

MGunnigle, G.

M. Chantler, M. Petrou, A. Penirsche, M. Schmidt, and G. MGunnigle, “Classifying surface texture while simultaneously estimating illumination direction,” Int. J. Comput. Vis. 62, 83–96 (2005).

Mitsuru, I.

X. K. Wang, X. Mao, and I. Mitsuru, “Human face analysis with nonlinear manifold learning,” J. Electron. Inf. Technol. 33, 2531–2535 (2011).

Müller, H.

Naccari, F.

Neyer, A.

Nishino, K.

K. Hara, K. Nishino, and K. Ikeuchi, “Light source position and reflectance estimation from a single view without the distant illumination assumption,” IEEE Trans. Pattern Anal. Mach. Intell. 27, 493–505 (2005).
[CrossRef]

Niu, S. Z.

Y. Z. Li, J. Hu, S. Z. Niu, X. Z. Meng, and Y. L. Zhu, “Exposing digital image forgeries by detecting inconsistence in light source direction,” J. Beijing Univ. Posts Telecommun. 34, 26–30 (2011).

Park, S.

H. Lee, H. Choi, B. Lee, S. Park, and B. Kang, “One dimensional conversion of color temperature in perceived illumination,” IEEE Trans. Consum. Electron. 47, 340–346 (2001).

Penirsche, A.

M. Chantler, M. Petrou, A. Penirsche, M. Schmidt, and G. MGunnigle, “Classifying surface texture while simultaneously estimating illumination direction,” Int. J. Comput. Vis. 62, 83–96 (2005).

Pentland, A. P.

A. P. Pentland, “Local shading analysis,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-6, 170–187 (1984).
[CrossRef]

Peters, R. A.

R. A. Peters and R. N. Strickland, “Image complexity metrics for automatic target recognizers,” in Proceedings of the Automatic Target Recognition System and Technology Conference, October1990, pp. 1–17.

Petrou, M.

M. Chantler, M. Petrou, A. Penirsche, M. Schmidt, and G. MGunnigle, “Classifying surface texture while simultaneously estimating illumination direction,” Int. J. Comput. Vis. 62, 83–96 (2005).

Pont, S.

Sato, I.

I. Sato, Y. Sato, and K. Ikeuchi, “Illumination distribution from shadows,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June1999, pp. 306–312.

Sato, Y.

I. Sato, Y. Sato, and K. Ikeuchi, “Illumination distribution from shadows,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June1999, pp. 306–312.

Schettini, R.

Schmidt, M.

M. Chantler, M. Petrou, A. Penirsche, M. Schmidt, and G. MGunnigle, “Classifying surface texture while simultaneously estimating illumination direction,” Int. J. Comput. Vis. 62, 83–96 (2005).

Shen, H. L.

Shen, X. J.

Y. D. Lv, X. J. Shen, H. P. Chen, and Y. W. Wang, “Blind identification for digital images based on inconsistency of illuminant direction,” J. Jilin Univ. 34, 293–298 (2009).

Strickland, R. N.

R. A. Peters and R. N. Strickland, “Image complexity metrics for automatic target recognizers,” in Proceedings of the Automatic Target Recognition System and Technology Conference, October1990, pp. 1–17.

Sun, X. B.

X. B. Sun, J. Yin, D. H. Li, and B. L. Xiao, “Point in polygon testing based on normal direction,” Opt. Precis. Eng. 16, 1122–1126 (2008).

Tominaga, S.

Wang, W. B.

Y. J. Yang, R. C. Zhao, and W. B. Wang, “The detection of shadow region in aerial image,” Signal Process. 18, 228–232 (2002).

Wang, X. K.

X. K. Wang, X. Mao, and I. Mitsuru, “Human face analysis with nonlinear manifold learning,” J. Electron. Inf. Technol. 33, 2531–2535 (2011).

X. K. Wang, X. Mao, and C. D. Caleanu, “Nonlinear shape-texture manifold learning,” IEICE Trans. Inf. Syst. E93.D, 2016–2019 (2010).

Wang, Y. W.

Y. D. Lv, X. J. Shen, H. P. Chen, and Y. W. Wang, “Blind identification for digital images based on inconsistency of illuminant direction,” J. Jilin Univ. 34, 293–298 (2009).

Xiao, B. L.

X. B. Sun, J. Yin, D. H. Li, and B. L. Xiao, “Point in polygon testing based on normal direction,” Opt. Precis. Eng. 16, 1122–1126 (2008).

Xue, Y. L.

Y. L. Xue, X. Mao, C. D. Caleanu, and S. W. Lv, “Layered fuzzy facial expression generation of virtual agent,” Chin. J. Electron. 19, 69–74 (2010).

Yang, J.

J. Yang, Z. P. Deng, Y. K. Guo, and J. G. Li, “Two new approaches for illuminant direction estimation,” J. Shanghai Jiaotong Univ. 36, 894–896 (2002).

Yang, J. F.

J. H. Liu, J. F. Yang, and T. Fang, “Color property analysis of remote sensing imagery,” Acta Photon. Sin. 38, 441–447 (2009).

Yang, X. M.

Z. Y. Gao, X. M. Yang, J. M. Gong, and H. Jin, “Research on image complexity description methods,” J. Image Graphics 15, 129–135 (2010).

Yang, Y. H.

Y. F. Zhang and Y. H. Yang, “Multiple illuminant direction detection with application to image synthesis,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 915–920 (2001).
[CrossRef]

Yang, Y. J.

Y. J. Yang, R. C. Zhao, and W. B. Wang, “The detection of shadow region in aerial image,” Signal Process. 18, 228–232 (2002).

Yin, J.

X. B. Sun, J. Yin, D. H. Li, and B. L. Xiao, “Point in polygon testing based on normal direction,” Opt. Precis. Eng. 16, 1122–1126 (2008).

Yuen, S. Y.

C. K. Chow and S. Y. Yuen, “Illumination direction estimation for augmented reality using a surface input real valued output regression network,” Pattern Recogn. 43, 1700–1716 (2010).
[CrossRef]

Zhang, Y. F.

Y. F. Zhang and Y. H. Yang, “Multiple illuminant direction detection with application to image synthesis,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 915–920 (2001).
[CrossRef]

Zhao, R. C.

Y. J. Yang, R. C. Zhao, and W. B. Wang, “The detection of shadow region in aerial image,” Signal Process. 18, 228–232 (2002).

Zheng, Q. F.

Q. F. Zheng and R. Chellappa, “Estimation of illuminant direction, albedo, and shape from shading,” IEEE Trans. Pattern Anal. Mach. Intell. 13, 680–702 (1991).
[CrossRef]

Zhou, W.

W. Zhou and C. Kambhamettu, “A unified framework for scene illuminant estimation,” Image Vis. Comput. 26, 415–429 (2008).
[CrossRef]

Zhu, Y. L.

Y. Z. Li, J. Hu, S. Z. Niu, X. Z. Meng, and Y. L. Zhu, “Exposing digital image forgeries by detecting inconsistence in light source direction,” J. Beijing Univ. Posts Telecommun. 34, 26–30 (2011).

Zickler, T.

T. Zickler, S. P. Mallick, D. J. Kriegman, and P. N. Belhumeur, “Color subspaces as photometric invariants,” Int. J. Comput. Vis. 79, 13–30 (2008).
[CrossRef]

Acta Photon. Sin. (1)

J. H. Liu, J. F. Yang, and T. Fang, “Color property analysis of remote sensing imagery,” Acta Photon. Sin. 38, 441–447 (2009).

Appl. Opt. (3)

Chin. J. Electron. (1)

Y. L. Xue, X. Mao, C. D. Caleanu, and S. W. Lv, “Layered fuzzy facial expression generation of virtual agent,” Chin. J. Electron. 19, 69–74 (2010).

IEEE Trans. Consum. Electron. (1)

H. Lee, H. Choi, B. Lee, S. Park, and B. Kang, “One dimensional conversion of color temperature in perceived illumination,” IEEE Trans. Consum. Electron. 47, 340–346 (2001).

IEEE Trans. Neural Netw. (1)

S. Y. Cho and T. W. S. Chow, “Neural computation approach for developing a 3-D shape reconstruction model,” IEEE Trans. Neural Netw. 12, 1204–1214 (2001).
[CrossRef]

IEEE Trans. Pattern Anal. Mach. Intell. (8)

Q. F. Zheng and R. Chellappa, “Estimation of illuminant direction, albedo, and shape from shading,” IEEE Trans. Pattern Anal. Mach. Intell. 13, 680–702 (1991).
[CrossRef]

K. Hara, K. Nishino, and K. Ikeuchi, “Light source position and reflectance estimation from a single view without the distant illumination assumption,” IEEE Trans. Pattern Anal. Mach. Intell. 27, 493–505 (2005).
[CrossRef]

A. P. Pentland, “Local shading analysis,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-6, 170–187 (1984).
[CrossRef]

Y. F. Zhang and Y. H. Yang, “Multiple illuminant direction detection with application to image synthesis,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 915–920 (2001).
[CrossRef]

C. S. Bouganis and M. Brookes, “Multiple light source detection,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 509–514 (2004).
[CrossRef]

A. Georghiades, P. Belhumeur, and D. Kriegman, “From few to many: illumination cone models for face recognition under variable lighting and pose,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 643–660 (2001).
[CrossRef]

K. C. Lee, J. Ho, and D. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Trans. Pattern Anal. Mach. Intell. 27, 684–698 (2005).
[CrossRef]

J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 679–698 (1986).
[CrossRef]

IEICE Trans. Inf. Syst. (1)

X. K. Wang, X. Mao, and C. D. Caleanu, “Nonlinear shape-texture manifold learning,” IEICE Trans. Inf. Syst. E93.D, 2016–2019 (2010).

Image Vis. Comput. (1)

W. Zhou and C. Kambhamettu, “A unified framework for scene illuminant estimation,” Image Vis. Comput. 26, 415–429 (2008).
[CrossRef]

Int. J. Comput. Vis. (2)

T. Zickler, S. P. Mallick, D. J. Kriegman, and P. N. Belhumeur, “Color subspaces as photometric invariants,” Int. J. Comput. Vis. 79, 13–30 (2008).
[CrossRef]

M. Chantler, M. Petrou, A. Penirsche, M. Schmidt, and G. MGunnigle, “Classifying surface texture while simultaneously estimating illumination direction,” Int. J. Comput. Vis. 62, 83–96 (2005).

J. Beijing Univ. Posts Telecommun. (1)

Y. Z. Li, J. Hu, S. Z. Niu, X. Z. Meng, and Y. L. Zhu, “Exposing digital image forgeries by detecting inconsistence in light source direction,” J. Beijing Univ. Posts Telecommun. 34, 26–30 (2011).

J. Electron. Inf. Technol. (1)

X. K. Wang, X. Mao, and I. Mitsuru, “Human face analysis with nonlinear manifold learning,” J. Electron. Inf. Technol. 33, 2531–2535 (2011).

J. Image Graphics (1)

Z. Y. Gao, X. M. Yang, J. M. Gong, and H. Jin, “Research on image complexity description methods,” J. Image Graphics 15, 129–135 (2010).

Y. D. Lv, X. J. Shen, H. P. Chen, and Y. W. Wang, “Blind identification for digital images based on inconsistency of illuminant direction,” J. Jilin Univ. 34, 293–298 (2009).

J. Yang, Z. P. Deng, Y. K. Guo, and J. G. Li, “Two new approaches for illuminant direction estimation,” J. Shanghai Jiaotong Univ. 36, 894–896 (2002).

X. B. Sun, J. Yin, D. H. Li, and B. L. Xiao, “Point in polygon testing based on normal direction,” Opt. Precis. Eng. 16, 1122–1126 (2008).

C. K. Chow and S. Y. Yuen, “Illumination direction estimation for augmented reality using a surface input real valued output regression network,” Pattern Recogn. 43, 1700–1716 (2010).
[CrossRef]

Y. J. Yang, R. C. Zhao, and W. B. Wang, “The detection of shadow region in aerial image,” Signal Process. 18, 228–232 (2002).

M. Chacon, L. E. Aguilar, and A. Delgado, “Fuzzy adaptive edge definition based on the complexity of the image,” in Proceedings of the 10th IEEE International Conference on Fuzzy Systems, December 2001, pp. 675–678.

M. Chacon, D. Alma, and S. Corral, “Image complexity measure: a human criterion free approach,” in Proceedings of the IEEE Annual Meeting of the North American Fuzzy Information Processing Society, June 2005, pp. 241–246.

R. A. Peters and R. N. Strickland, “Image complexity metrics for automatic target recognizers,” in Proceedings of the Automatic Target Recognition System and Technology Conference, October 1990, pp. 1–17.

I. Sato, Y. Sato, and K. Ikeuchi, “Illumination distribution from shadows,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 1999, pp. 306–312.

T. Kim and K. S. Hong, “A practical single image based approach for estimating illumination distribution from shadows,” in Proceedings of the IEEE International Conference on Computer Vision, October 2005, pp. 266–271.


Figures (14)

Fig. 1. System architecture of the proposed illuminant direction estimation method.

Fig. 2. Local regions and their edge detection results. Regions with simple edges tend to have homogeneous textures. (a) Sample 1. (b) Sample 2.

Fig. 3. Some samples and their binary edge images.

Fig. 4. Edge level percentages of some samples. The abscissa is the index of the local region; the ordinate is the edge level percentage.

Fig. 5. Average gray values of some samples. The abscissa is the index of the local region; the ordinate is the average gray value.

Fig. 6. Overall flow of region selection. B is the set of the first eight values of {ηi}; C is the set of selected regions.
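The selection flow in Fig. 6 can be sketched in a few lines of NumPy, assuming the 16 local regions are ranked by ascending edge level percentage, the eight simplest form set B, and the three brightest regions in B form set C. The function name and the toy data below are illustrative, not the authors' code:

```python
import numpy as np

def select_regions(gray_blocks, edge_blocks, n_simple=8, n_keep=3):
    """Rank local regions by ascending edge level percentage, keep the
    n_simple simplest (set B), then return the indices of the n_keep
    regions in B with the largest average gray value (set C)."""
    psi = np.array([np.count_nonzero(e) / e.size for e in edge_blocks])
    mean_gray = np.array([g.mean() for g in gray_blocks])
    b = np.argsort(psi, kind="stable")[:n_simple]                   # set B: simplest regions
    c = b[np.argsort(mean_gray[b], kind="stable")[::-1][:n_keep]]   # set C: brightest in B
    return sorted(int(i) for i in c)

# 16 toy 4x4 regions: region i has i edge pixels, so psi rises with i;
# among the 8 simplest regions, indices 2, 5, and 7 are the brightest.
grays = [10, 20, 90, 30, 40, 80, 50, 70] + [60] * 8
gray_blocks = [np.full((4, 4), g, dtype=float) for g in grays]
edge_blocks = [np.array([1] * i + [0] * (16 - i)).reshape(4, 4) for i in range(16)]
print(select_regions(gray_blocks, edge_blocks))  # [2, 5, 7]
```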

Fig. 7. 3D Lambertian model and its 2D representation.

Fig. 8. Calculation process for the normal vector. The points on the normal lines are those with the maximal gray-scale difference from the center pixel.
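As a rough sketch of the construction in Fig. 8, the snippet below estimates the 2D normal at a pixel as the unit vector toward the 8-neighbor whose gray value differs most from the center pixel. This is one illustrative reading of the figure under that assumption, not the authors' implementation:

```python
import numpy as np

def normal_direction(gray, x, y):
    """Estimate the 2D surface normal (Nx, Ny) at pixel (x, y) as the unit
    vector toward the 8-neighbor with the maximal gray-scale difference
    from the center pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    center = float(gray[x, y])
    # Pick the neighbor offset with the largest absolute gray difference.
    dx, dy = max(offsets,
                 key=lambda o: abs(float(gray[x + o[0], y + o[1]]) - center))
    v = np.array([dx, dy], dtype=float)
    return v / np.linalg.norm(v)
```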

Fig. 9. Angle representation rule of the illuminant direction.

Fig. 10. Three different cases of image partition. (a) Case a. (b) Case b. (c) Case c.

Fig. 11. Average correct rate as a function of T. Since the cardinality of set B is 8, the maximum value of T is 8.

Fig. 12. Some results of the first experiment. (a) Luminance component Y. (b) Partition results. (c) Selected regions. (d) Final illuminant direction L (dotted arrow).

Fig. 13. Some illuminant direction estimation results. (a) Unclear sky images. (b) Images taken at night.

Fig. 14. Some results of the second experiment. (a) Original images and their real illuminant directions. (b) Partition results. (c) Selected regions. (d) Final illuminant direction L.

Tables (2)

Table 1. Average Correct Rate rc and Execution Time t of the First Experiment

Table 2. Average Correct Rate rc and Execution Time t of the Second Experiment

Equations (11)


$$\psi_i=\frac{|A|}{M\times N},\qquad A=\{\,p_i(x,y)\mid p_i(x,y)=1\,\},$$
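The edge level percentage ψ_i is the fraction of edge pixels in the M × N binary edge map of a local region. A minimal NumPy sketch (the function name and toy region are illustrative; in the paper the binary map comes from the Canny detector):

```python
import numpy as np

def edge_level_percentage(edge_map):
    """Edge level percentage psi_i = |A| / (M x N), where A is the set of
    edge pixels (value 1) in the M x N binary edge map of a local region."""
    edge_map = np.asarray(edge_map)
    m, n = edge_map.shape
    return np.count_nonzero(edge_map == 1) / (m * n)

# Toy 4x4 region with 4 edge pixels: psi = 4 / 16 = 0.25.
region = np.array([[0, 1, 0, 0],
                   [0, 1, 0, 0],
                   [0, 1, 0, 0],
                   [0, 1, 0, 0]])
print(edge_level_percentage(region))  # 0.25
```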
$$\psi\!\left(F_Y^{\,i}(x,y)\right)\le\psi\!\left(F_Y^{\,i+1}(x,y)\right).$$
$$f_1(L_m,L_n,k_a)=\left\|\begin{bmatrix}
N_x(f_Y^m(x_1,y_1)) & N_y(f_Y^m(x_1,y_1)) & 0 & 0 & 1\\
N_x(f_Y^m(x_1,y_2)) & N_y(f_Y^m(x_1,y_2)) & 0 & 0 & 1\\
\vdots & \vdots & \vdots & \vdots & \vdots\\
N_x(f_Y^m(x_p,y_p)) & N_y(f_Y^m(x_p,y_p)) & 0 & 0 & 1\\
0 & 0 & N_x(f_Y^n(x_1,y_1)) & N_y(f_Y^n(x_1,y_1)) & 1\\
0 & 0 & N_x(f_Y^n(x_1,y_2)) & N_y(f_Y^n(x_1,y_2)) & 1\\
\vdots & \vdots & \vdots & \vdots & \vdots\\
0 & 0 & N_x(f_Y^n(x_p,y_p)) & N_y(f_Y^n(x_p,y_p)) & 1
\end{bmatrix}
\begin{bmatrix}L_x^m\\ L_y^m\\ L_x^n\\ L_y^n\\ k_a\end{bmatrix}
-\begin{bmatrix}f_Y^m(x_1,y_1)\\ f_Y^m(x_1,y_2)\\ \vdots\\ f_Y^m(x_p,y_p)\\ f_Y^n(x_1,y_1)\\ f_Y^n(x_1,y_2)\\ \vdots\\ f_Y^n(x_p,y_p)\end{bmatrix}\right\|^2
=\|Mv-b\|^2,$$
$$f_2(L_m,L_n,k_a)=\left\|\begin{bmatrix}1 & 0 & -1 & 0 & 0\\ 0 & 1 & 0 & -1 & 0\end{bmatrix}\begin{bmatrix}L_x^m\\ L_y^m\\ L_x^n\\ L_y^n\\ k_a\end{bmatrix}\right\|^2=\|Cv\|^2.$$
$$f(L_m,L_n,k_a)=f_1+\lambda f_2,$$
$$\begin{cases}\min f_1(L_m,L_n,k_a)=\|Mv-b\|^2\\ \text{s.t.}\;\; f_2(L_m,L_n,k_a)=\|Cv\|^2=0.\end{cases}$$
$$\begin{cases}\dfrac{\partial f(L_m,L_n,k_a)}{\partial v}=2M^TMv+C^T\lambda-2M^Tb=0\\ Cv=0.\end{cases}$$
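Treating λ as a vector of Lagrange multipliers, the stationarity and constraint conditions form a linear KKT system in (v, λ) that can be solved directly. A hedged NumPy sketch under that assumption (illustrative, not the authors' code):

```python
import numpy as np

def solve_constrained_ls(M, C, b):
    """Solve min ||M v - b||^2 subject to C v = 0 via the KKT system
       [2 M^T M  C^T] [v  ]   [2 M^T b]
       [  C       0 ] [lam] = [   0   ].
    Returns v = (Lx_m, Ly_m, Lx_n, Ly_n, k_a)."""
    n, k = M.shape[1], C.shape[0]
    kkt = np.block([[2.0 * M.T @ M, C.T],
                    [C, np.zeros((k, k))]])
    rhs = np.concatenate([2.0 * M.T @ b, np.zeros(k)])
    return np.linalg.solve(kkt, rhs)[:n]  # drop the multipliers, keep v
```

The constraint rows of C enforce the infinite-light-source assumption that both regions share one illuminant direction (L_x^m = L_x^n, L_y^m = L_y^n).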
$$W(m,n)=\frac{1}{\psi_m+\psi_n}.$$
$$L=\hat{L}(1,2)+\hat{L}(2,3)+\hat{L}(3,1)=W(1,2)L(1,2)+W(2,3)L(2,3)+W(3,1)L(3,1).$$
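A minimal sketch of this weighted synthesis, assuming each pairwise direction L(m, n) is a 2D vector (the function name and toy values are illustrative):

```python
import numpy as np

def synthesize_direction(pair_dirs, psi):
    """Combine pairwise illuminant directions L(m, n) into the final L,
    weighting each by W(m, n) = 1 / (psi_m + psi_n), so that pairs of
    simpler regions (small edge level percentage) contribute more."""
    total = np.zeros(2)
    for (m, n), L_mn in pair_dirs.items():
        total += np.asarray(L_mn, dtype=float) / (psi[m] + psi[n])
    return total

# Toy example: W(1,2) = 1/0.2 = 5, W(2,3) = W(3,1) = 1/0.4 = 2.5.
psi = {1: 0.1, 2: 0.1, 3: 0.3}
L = synthesize_direction({(1, 2): [1, 0], (2, 3): [0, 1], (3, 1): [1, 1]}, psi)
```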
$$t(\text{case c})>t(\text{case b})>t(\text{case a}),$$
$$r_c(\text{case b})>r_c(\text{case c})>r_c(\text{case a}).$$
