Abstract

We propose illumination invariant face recognition and 3D face reconstruction using desktop optics. The computer screen is used as a programmable extended light source to illuminate the face from different directions and acquire images. Features are extracted from these images and projected onto multiple linear subspaces in an effort to preserve unique features rather than the most varying ones. Experiments were performed on our database of 4347 images (106 subjects) as well as the extended Yale B and CMU-PIE databases, and better results were achieved compared to the existing state of the art. We also propose an efficient algorithm for reconstructing 3D face models from three images taken under arbitrary illumination. The subspace coefficients of training faces are used as input patterns to train multiple Support Vector Machines (SVMs), where the output labels are the subspace parameters of ground-truth 3D face models. Support Vector Regression is used to learn multiple functions that map the input coefficients to the parameters of the 3D face. During testing, three images of an unknown/novel face under arbitrary illumination are used to estimate its 3D model. Quantitative results are presented on our database of 106 subjects, and qualitative results are presented on the Yale B database.
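
To illustrate the regression step summarized above, the following minimal sketch trains one support vector regressor per 3D subspace parameter and uses the ensemble to map the coefficients of a novel face to its 3D model parameters. This is only a sketch under stated assumptions: the dimensions, kernel, and regularization values are placeholders, and scikit-learn's SVR stands in for the SVM-light implementation cited in [10].

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical dimensions (not from the paper): N training faces, d subspace
# coefficients extracted from the three input images, k 3D-model parameters.
N, d, k = 100, 60, 40
rng = np.random.default_rng(0)
X = rng.standard_normal((N, d))   # stand-in for training subspace coefficients
Y = rng.standard_normal((N, k))   # stand-in for ground-truth 3D model parameters

# One regressor per output parameter, i.e. multiple learned mapping functions.
models = [SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, Y[:, j]) for j in range(k)]

def estimate_3d_parameters(coeffs):
    """Map the subspace coefficients of a novel face to its 3D model parameters."""
    coeffs = np.asarray(coeffs, dtype=float).reshape(1, -1)
    return np.array([m.predict(coeffs)[0] for m in models])

# Example: estimate the 3D parameters of one unseen face.
params = estimate_3d_parameters(rng.standard_normal(d))
```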

© 2011 OSA


References


  1. W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, "Face recognition: A literature survey," ACM Comput. Surv. 35(4), 399–458 (2003).
  2. M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cogn. Neurosci. 3, 71–86 (1991).
  3. P. Belhumeur, J. Hespanha, and D. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Trans. Pattern Anal. Mach. Intell. 19, 711–720 (1997).
  4. L. Wiskott, J. Fellous, N. Kruger, and C. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 775–779 (1997).
  5. O. Arandjelovic and R. Cipolla, "Face recognition from video using the generic shape-illumination manifold," in Proceedings of European Conference on Computer Vision (Springer, 2006), pp. 27–40.
  6. K. Lee and D. Kriegman, "Online probabilistic appearance manifolds for video-based recognition and tracking," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2005), pp. 852–859.
  7. L. Liu, Y. Wang, and T. Tan, "Online appearance model learning for video-based face recognition," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–7.
  8. J. Tangelder and B. Schouten, "Learning a sparse representation from multiple still images for on-line face recognition in an unconstrained environment," in Proceedings of International Conference on Pattern Recognition (IEEE, 2006), pp. 1087–1090.
  9. M. Do and M. Vetterli, "The Contourlet transform: an efficient directional multiresolution image representation," IEEE Trans. Image Process. 14(12), 2091–2106 (2005).
  10. T. Joachims, "Making large-scale SVM learning practical," in Advances in Kernel Methods (MIT Press, 1999), pp. 169–184.
  11. P. Belhumeur and D. Kriegman, "What is the set of images of an object under all possible illumination conditions?," Int. J. Comput. Vision 28(3), 245–260 (1998).
  12. A. Georghiades, P. Belhumeur, and D. Kriegman, "From few to many: Illumination cone models for face recognition under variable lighting and pose," IEEE Trans. Pattern Anal. Mach. Intell. 23(6), 643–660 (2001).
  13. P. Hallinan, "A low-dimensional representation of human faces for arbitrary lighting conditions," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1994), pp. 995–999.
  14. R. Basri and D. Jacobs, "Lambertian reflectance and linear subspaces," IEEE Trans. Pattern Anal. Mach. Intell. 25(2), 218–233 (2003).
  15. K. Lee, J. Ho, and D. Kriegman, "Acquiring linear subspaces for face recognition under variable lighting," IEEE Trans. Pattern Anal. Mach. Intell. 27(5), 684–698 (2005).
  16. Y. Schechner, S. Nayar, and P. Belhumeur, "A theory of multiplexed illumination," in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2003), pp. 808–815.
  17. W. R. Boukabou and A. Bouridane, "Contourlet-based feature extraction with PCA for face recognition," in Proceedings of NASA/ESA Conference on Adaptive Hardware and Systems (IEEE, 2008), pp. 482–486.
  18. Y. Huang, J. Li, G. Duan, J. Lin, D. Hu, and B. Fu, "Face recognition using illumination invariant features in Contourlet domain," in Proceedings of International Conference on Apperceiving Computing and Intelligence Analysis (IEEE, 2010), pp. 294–297.
  19. A. Mian, "Face recognition using Contourlet transform and multidirectional illumination from a computer screen," in Proceedings of Advanced Concepts for Intelligent Vision Systems (Springer, 2010), pp. 332–334.
  20. K. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Comput. Vis. Image Und. 101, 1–15 (2006).
  21. D. Scharstein, R. Szeliski, and R. Zabih, "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms," Int. J. Comput. Vision 47, 7–42 (2002).
  22. V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Trans. Pattern Anal. Mach. Intell. 25(9), 1063–1074 (2003).
  23. G. Schindler, "Photometric stereo via computer screen lighting for real-time surface reconstruction," in Proceedings of International Symposium on 3D Data Processing, Visualization and Transmission (IEEE, 2008).
  24. N. Funk and Y. Yang, "Using a raster display for photometric stereo," in Proceedings of Canadian Conference on Computer and Robot Vision (IEEE, 2007), pp. 201–207.
  25. J. Clark, "Photometric stereo using LCD displays," Image Vis. Comput. 28(4), 704–714 (2010).
  26. T. Sim, S. Baker, and M. Bsat, "The CMU pose, illumination, and expression database," IEEE Trans. Pattern Anal. Mach. Intell. 25(12), 1615–1618 (2003).
  27. P. Viola and M. Jones, "Robust real-time face detection," Int. J. Comput. Vision 57(2), 137–154 (2004).
  28. L. Shen and L. Bai, "A review on Gabor wavelets for face recognition," Pattern Anal. Appl. 9, 273–292 (2006).
  29. J. D'Errico, "Surface fitting using Gridfit," http://www.mathworks.com/matlabcentral/fileexchange/.
  30. P. Phillips, P. Flynn, T. Scruggs, K. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek, "Overview of the face recognition grand challenge," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2005), pp. 947–954.
  31. T. Chen, W. Yin, X. Zhou, D. Comaniciu, and T. Huang, "Total variation models for variable lighting face recognition," IEEE Trans. Pattern Anal. Mach. Intell. 28(9), 1519–1524 (2006).
  32. X. Tan and B. Triggs, "Enhanced local texture feature sets for face recognition under difficult lighting conditions," in Proceedings of IEEE International Workshop on Analysis and Modeling of Faces and Gestures (IEEE, 2007).

Figures (13)

Fig. 1. Multiple images of a face are acquired while illumination is varied by scanning a white stripe on a computer screen (a minimal capture sketch follows this figure list).
Fig. 2. Sample faces after preprocessing.
Fig. 3. Contourlet coefficients of a sample face.
Fig. 4. Preprocessed 3D faces from our database and the average face (last).
Fig. 5. Exp-1: Left: recognition rate versus the number of subspace Contourlet coefficients. Right: recognition rates for individual images/illumination conditions (x-axis).
Fig. 6. Exp-2 results for our database. CMC (left) and ROC (right) curves for different numbers of training images. Recognition was performed on the basis of a single image.
Fig. 7. Exp-2 results for the extended Yale B database. CMC (left) and ROC (right) curves for different numbers of training images.
Fig. 8. Exp-2: error rates on the extended Yale B database for direct comparison with [15].
Fig. 9. Exp-3 results for our database.
Fig. 10. Average recognition/verification (at 0.001 FAR) rates versus the number of images in a many-to-many matching approach. Standard deviation is shown as vertical lines.
Fig. 11. Ground truth (top) and reconstructed (bottom) 3D faces from (a)(b) Exp-4 and (c)(d) Exp-5. (e) Reconstruction error of database faces (Exp-4) and unseen faces (Exp-5).
Fig. 12. Histogram of reconstruction errors in Euclidean space. Left: Exp-4 (database faces). Right: Exp-5 (unseen faces).
Fig. 13. Top: ground truth images of the first five subjects in the Yale B database (pose 7). Middle: 3D reconstructed faces rotated so that their pose approximately matches the top row for comparison. Bottom: the 3D faces further rotated to show their profile.
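
Fig. 1 describes acquisition by scanning a white stripe across the screen while the camera captures one frame per stripe position. The sketch below shows how such a capture loop might be scripted with OpenCV; the screen resolution, number of stripe positions, settle delay, and camera index are assumptions for illustration, not the parameters used in the paper.

```python
import cv2
import numpy as np

# Assumed display size, number of stripe positions, and webcam index.
SCREEN_W, SCREEN_H, STRIPES = 1280, 1024, 7

cam = cv2.VideoCapture(0)  # camera mounted above/near the screen
cv2.namedWindow("stripe", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("stripe", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

frames = []
stripe_w = SCREEN_W // STRIPES
for i in range(STRIPES):
    # Black screen with a single vertical white stripe acting as the light source.
    pattern = np.zeros((SCREEN_H, SCREEN_W), dtype=np.uint8)
    pattern[:, i * stripe_w:(i + 1) * stripe_w] = 255
    cv2.imshow("stripe", pattern)
    cv2.waitKey(300)               # let the display and camera exposure settle
    ok, frame = cam.read()
    if ok:
        frames.append(frame)       # one face image per illumination direction

cam.release()
cv2.destroyAllWindows()
```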

Tables (6)

Table 1. Experiment 2 Results (in %) Using our Database
Table 2. Experiment 2 Results (in %) Using the Extended Yale B Database
Table 3. Experiment 3 Results (in %) Using our Database
Table 4. Comparison on the Yale B (10 Subjects) and Extended Yale B Databases
Table 5. Avg. Error (%) for Each Coefficient
Table 6. Exp-6: Average Error and Standard Deviation (%) for Each Coefficient Using 20 Combinations of Training and Test Images

Equations (14)


$$\mu^{sk} = \frac{1}{N \times G}\sum_{n=1}^{N \times G} A_n^{sk},$$
$$C^{sk} = \frac{1}{N \times G}\sum_{n=1}^{N \times G}\left(A_n^{sk}-\mu^{sk}\right)\left(A_n^{sk}-\mu^{sk}\right)^T.$$
$$U^{sk} S^{sk} \left(V^{sk}\right)^T = C^{sk},$$
$$B^{sk} = \left(U_L^{sk}\right)^T\left(A^{sk}-\mu^{sk}\,p\right),$$
$$U S V^T = C,$$
$$U = B\,U / \operatorname{diag}(S).$$
$$F = U^T B$$
$$\gamma = \frac{n\sum t q - \sum t \sum q}{\sqrt{n\sum t^2 - \left(\sum t\right)^2}\,\sqrt{n\sum q^2 - \left(\sum q\right)^2}},$$
$$U S V = C,$$
$$U_i = \frac{1}{\lambda_i}\, I\, U_i,$$
$$C_e = \frac{1}{n}\sum_{j=1}^{n}\left(E_j-\mu\right)^T\left(E_j-\mu\right),$$
$$U_e S_e V_e = C_e,$$
$$U_j = \frac{1}{\lambda_j}\left(E-\mu\,p\right) U_j, \quad \text{for } j = 1 \ldots (n-1),$$
$$F = U^T E,$$
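
Assuming the reconstruction of the equations above is faithful, the following NumPy sketch shows the two pieces that recur in them: the subspace projection $F = U^T B$ and the normalized correlation $\gamma$ between template and query coefficient vectors. Variable names follow the equations; the exact centring and normalization used in the paper may differ.

```python
import numpy as np

def project(U, B):
    """Subspace projection of a feature vector/matrix B: F = U^T B."""
    return U.T @ B

def correlation(t, q):
    """Normalized correlation gamma between template t and query q coefficients."""
    t = np.ravel(t).astype(float)
    q = np.ravel(q).astype(float)
    n = t.size
    num = n * np.sum(t * q) - np.sum(t) * np.sum(q)
    den = np.sqrt(n * np.sum(t ** 2) - np.sum(t) ** 2) * \
          np.sqrt(n * np.sum(q ** 2) - np.sum(q) ** 2)
    return num / den
```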
