Abstract

Capturing surface appearance is a challenging task because reflectance varies as a function of viewing and illumination direction. In addition, most real-world surfaces have a textured appearance, so reflectance also varies spatially. We present a texture camera that can conveniently capture spatially varying reflectance on a surface. Unlike other bidirectional imaging devices, the design eliminates the need for complex mechanical apparatus to move the light source and the camera over a hemisphere of possible directions. To facilitate fast and convenient measurement, the device uses a curved mirror so that multiple views of the same surface point are captured simultaneously. Simple planar motions of the imaging components also permit change of illumination direction and region imaging. We present the current prototype of this device, imaging results, and an analysis of the important imaging properties.

© 2004 Optical Society of America

References

  1. S. C. Foo, K. E. Torrance, “Equipment acquisition for the light measurement laboratory of the Cornell program of computer graphics,” (Program of Computer Graphics, Cornell University, Ithaca, N.Y., 1995).
  2. K. J. Dana, B. van Ginneken, S. K. Nayar, J. J. Koenderink, “Reflectance and texture of real world surfaces,” ACM Trans. Graph. 18, 1–34 (1999).
  3. K. J. Dana, S. K. Nayar, “Histogram model for 3D textures,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 1998), pp. 618–624.
  4. K. J. Dana, S. K. Nayar, “3D textured surface modeling,” in Workshop on the Integration of Appearance and Geometric Methods in Object Recognition (Institute of Electrical and Electronics Engineers, New York, 1999), pp. 46–56.
  5. K. J. Dana, S. K. Nayar, “Correlation model for 3D texture,” in Proceedings of the International Conference on Computer Vision (IEEE Computer Society Press, Los Alamitos, Calif., 1999), pp. 1061–1067.
  6. J. J. Koenderink, A. J. van Doorn, K. J. Dana, S. K. Nayar, “Bidirectional reflection distribution function of thoroughly pitted surfaces,” Int. J. Comput. Vision 31, 129–144 (1999).
  7. B. van Ginneken, J. J. Koenderink, K. J. Dana, “Texture histograms as a function of irradiation and viewing direction,” Int. J. Comput. Vision 31, 169–184 (1999).
  8. X. Liu, Y. Yu, H.-Y. Shum, “Synthesizing bidirectional texture functions for real world surfaces,” in SIGGRAPH’01, Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pp. 97–106; www.siggraph.org.
  9. X. Tong, J. Zhang, L. Liu, X. Wang, B. Guo, H. Y. Shum, “Synthesis of bidirectional texture functions on arbitrary surfaces,” ACM Trans. Graph. 21, 665–672 (2002).
  10. T. Leung, J. Malik, “Recognizing surfaces using three-dimensional textons,” in Proceedings of the International Conference on Computer Vision (IEEE Computer Society Press, Los Alamitos, Calif., 1999), pp. 1010–1017.
  11. P. Suen, G. Healey, “Analyzing the bidirectional texture function,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 1998), pp. 753–758.
  12. O. G. Cula, K. J. Dana, “Recognition methods for 3D textured surfaces,” in Human Vision and Electronic Imaging VI, B. E. Rogowitz, T. N. Pappas, eds., Proc. SPIE 4299, 209–220 (2001).
  13. O. G. Cula, K. J. Dana, “Compact representation of bidirectional texture functions,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 2001), pp. 1041–1067.
  14. A. Zalesny, L. Van Gool, “Multiview texture models,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 2001), pp. 615–622.
  15. M. Varma, A. Zisserman, “Classifying images of materials,” in Proceedings of the European Conference on Computer Vision (Springer-Verlag, Berlin, 2002), pp. 255–271.
  16. K. J. Dana, “BRDF/BTF measurement device,” in Proceedings of the International Conference on Computer Vision (IEEE Computer Society Press, Los Alamitos, Calif., 2001), pp. 460–466.
  17. S. K. Nayar, “Catadioptric omnidirectional camera,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 1997), pp. 482–488.
  18. S. Baker, S. K. Nayar, “A theory of single-viewpoint catadioptric image formation,” Int. J. Comput. Vision 35, 175–196 (1999).
  19. G. J. Ward, “Measuring and modeling anisotropic reflection,” Comput. Graph. 26, 265–272 (1992).
  20. K. J. Davis, D. C. Rawlings, “Directional reflectometer for measuring optical bidirectional reflectance,” U.S. patent 5,637,873 (June 10, 1997).
  21. P. R. Mattison, M. S. Dombrowski, J. Lorenz, K. Davis, H. Mann, P. Johnson, B. Foos, “Hand-held directional reflectometer: an angular imaging device to measure BRDF and HDR in real-time,” in Scattering and Surface Roughness II, Z. Gu, A. A. Maradudin, eds., Proc. SPIE 3426, 240–251 (1998).
  22. R. R. Carter, L. K. Pleskot, “Imaging scatterometer,” U.S. patent 5,912,741 (June 1999).
  23. S. R. Marschner, S. H. Westin, E. Lafortune, K. E. Torrance, “Image-based bidirectional reflectance distribution function measurement,” Appl. Opt. 39, 2592–2600 (2000).
  24. C. J. R. Sheppard, “Resolution for off-axis illumination,” J. Opt. Soc. Am. A 15, 622–624 (1998).
  25. O. G. Cula, K. J. Dana, F. P. Murphy, B. K. Rao, “Skin texture modeling,” Int. J. Comput. Vision (to be published).
  26. M. Turk, A. Pentland, “Eigenfaces for recognition,” J. Cogn. Neurosci. 3, 71–86 (1991).
  27. A. Pentland, B. Moghaddam, T. Starner, “View-based and modular eigenspaces for face recognition,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 1994), pp. 84–91.
  28. H. Murase, S. K. Nayar, “Visual learning and recognition of 3-D objects from appearance,” Int. J. Comput. Vision 14, 5–24 (1995).
  29. S. E. Chen, L. Williams, “View interpolation for image synthesis,” in SIGGRAPH’93, Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, pp. 279–288; www.siggraph.org.
  30. L. McMillan, G. Bishop, “Plenoptic modeling: an image-based rendering system,” in SIGGRAPH’95, Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, pp. 39–46; www.siggraph.org.
  31. S. E. Chen, “Quicktime VR—an image-based approach to virtual environment navigation,” in SIGGRAPH’95, Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, pp. 29–39; www.siggraph.org.
  32. M. Levoy, P. Hanrahan, “Light field rendering,” in SIGGRAPH’96, Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pp. 31–42; www.siggraph.org.
  33. S. J. Gortler, R. Grzeszczuk, R. Szeliski, M. F. Cohen, “The lumigraph,” in SIGGRAPH’96, Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pp. 43–54; www.siggraph.org.
  34. S. M. Seitz, C. R. Dyer, “View morphing,” in SIGGRAPH’96, Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pp. 21–30; www.siggraph.org.
  35. P. E. Debevec, C. J. Taylor, J. Malik, “Modeling and rendering architecture from photographs: a hybrid geometry and image-based approach,” in SIGGRAPH’96, Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pp. 11–20; www.siggraph.org.
  36. F. Bernardini, I. Martin, H. Rushmeier, “High quality texture reconstruction from multiple scans,” IEEE Trans. Visualization Comput. Graph. 7, 318–332 (2001).
  37. F. Bernardini, I. Martin, J. Mittleman, H. Rushmeier, G. Taubin, “Building a digital model of Michelangelo’s Florentine Pieta,” IEEE Comput. Graph. Appl. 22(1), 59–67 (2002).
  38. M. Levoy, K. Pulli, B. Curless, S. Rusinkiewicz, D. Koller, L. Pereira, M. Ginzton, S. Anderson, J. Davis, J. Ginsberg, J. Shade, D. Fulk, “The digital Michelangelo Project: 3D scanning of large statues,” in SIGGRAPH’00, Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pp. 131–144; www.siggraph.org.

Figures (24)

Fig. 1

Three images of a rough plastered surface observed with different viewing and illumination directions.

Fig. 2

The focusing property of a concave parabolic mirror is exploited to simultaneously measure reflected rays from a large range of angles over the hemisphere. The same mirror is used to direct the incident illumination ray to the sample at the desired angle.

Fig. 3

BRDF/BTF measurement device. The surface point is imaged by a CCD video camera observing an off-axis concave parabolic mirror to achieve simultaneous observation of a large range of viewing directions. Illumination direction is controlled by an aperture; i.e., translations of the aperture in the XZ plane cause variations in the illumination angle incident on the surface point. The device achieves viewing/illumination direction variations by using simple translations of the illumination aperture instead of complex gonioreflectometer equipment. Measurements of bidirectional texture are accomplished by translating the mirror in the XY plane.

Fig. 4

Diagram of an extended concave parabolic mirror. The parameter δ can be increased to increase θm. However, as θm approaches 90°, δ becomes infinitely large.

Fig. 5

Cross section of the concave parabolic mirror used in the imaging device. The off-axis mirror section extends from point A to point B. Light parallel to the mirror axis, i.e., the Y axis, is focused to point O.
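
The angular extent of this off-axis section can be checked numerically. The sketch below (Python; a minimal illustration assuming the prototype values F = 12.7 mm, z_B = F, and z_A = 3F listed in the Equations section, with the polar angle measured from the Z axis as in the θ definition given there) reproduces the worked values θ1 ≈ 36.9° and θ2 ≈ 22.6°.

```python
import math

F = 12.7  # focal parameter of the parabolic mirror in mm (prototype value)

def mirror_y(z, x=0.0):
    """Mirror height from y + F = (z**2 + x**2) / (4 F), with the focus at the origin."""
    return (z**2 + x**2) / (4 * F) - F

# Edges of the off-axis section (Fig. 5): B at z_B = F, A at z_A = 3F.
y_B = mirror_y(F)       # -3F/4
y_A = mirror_y(3 * F)   # +5F/4

# Polar angle of the focus-to-edge ray, measured from the Z axis.
theta_1 = math.degrees(math.atan2(abs(y_B), F))   # ~36.9 deg, below the Z axis (point B)
theta_2 = math.degrees(math.atan2(y_A, 3 * F))    # ~22.6 deg, above the Z axis (point A)
print(theta_1, theta_2, theta_1 + theta_2)        # combined span in the y-z plane, ~59.5 deg
```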

Fig. 6

Off-axis concave parabolic mirror used in the prototype with a sample (glossy blue cardboard) at focus.

Fig. 7

Device prototype including camera, illumination source, collimating lens assembly, illumination aperture, beam splitter, and off-axis concave parabolic mirror. A blue specular sample to be measured is shown.

Fig. 8

The radius of the illumination aperture is given by r. Depending on where the illumination intersects the mirror, the solid angle of illumination varies. Compare the angle of incident illumination when the light rays hit the top of the mirror with the corresponding angle when the light rays hit the bottom of the mirror. Equivalently, compare the arc ab with the arc cd. Clearly, the solid angle of illumination is larger at the bottom of the mirror than at the top.

Fig. 9

Illustration of the illumination angles. Because the illumination aperture is not infinitesimally small, there is a bundle of rays illuminating the surface characterized by the angle α. In general, the illumination is oblique, with the off-axis angle given by μ.

Fig. 10

Estimated cone angle in radians as a function of z coordinate of P (the intersection of the illuminating ray with the mirror).
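
A minimal Python sketch of this estimate, following the α(z) relation listed in the Equations section (F = 12.7 mm as in the prototype; the aperture diameter d below is a placeholder value, and atan2 is used instead of the arctan of a ratio to keep the quadrant correct):

```python
import math

F = 12.7   # mirror focal parameter (mm)
d = 2.0    # assumed illumination-aperture diameter (mm); placeholder value

def mirror_y(z):
    """Mirror height at coordinate z (x = 0): y = z**2 / (4 F) - F."""
    return z**2 / (4 * F) - F

def cone_angle(z, d=d):
    """Cone angle alpha(z): angular separation, seen from the focus, of the two
    edge rays that strike the mirror at z - d/2 and z + d/2."""
    lo, hi = z - d / 2, z + d / 2
    return abs(math.atan2(lo, mirror_y(lo)) - math.atan2(hi, mirror_y(hi)))

# Cone angle (radians) across the usable mirror section z = F ... 3F, as in Fig. 10;
# it shrinks toward the top of the mirror (larger z).
for z in (F, 1.5 * F, 2 * F, 2.5 * F, 3 * F):
    print(f"z = {z:5.1f} mm  alpha = {cone_angle(z):.4f} rad")
```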

Fig. 11

Cross-section image of the USAF 1951 standard test pattern with three black lines at 8 line pairs/mm (left) and 4.49 line pairs/mm (right). Eleven different illumination directions are used as indicated, and the plots illustrate the change in resolution with illumination direction. These plots indicate that a line width of 1/16 mm can be resolved for some illumination angles and that a line width of approximately 1/9 mm can be resolved for all illumination angles. (The 11 illumination directions are chosen from the center of the mirror, so only the polar angle θi varies and the azimuth ϕi is 90°. The viewing angle θv is 30°, and ϕv is 90°.)

Fig. 12

Spatial blurring as a function of illumination angle. The line spread function of the imaging device is modeled as a Gaussian function. The amount of spatial blurring is measured by estimating the standard deviation σ of the Gaussian blur applied to an ideal line to match the measured line. Note that smaller standard deviation corresponds to better resolution. As expected, the resolution is best for smaller values of the z coordinate, which correspond to a larger cone angle of illumination.
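
The matching procedure described in the caption can be sketched as a one-dimensional search over σ (an illustration only; the ideal and measured line profiles below are synthetic placeholders):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def estimate_blur_sigma(ideal, measured, sigmas=np.linspace(0.1, 10.0, 200)):
    """Return the Gaussian standard deviation (in pixels) whose blur of the ideal
    line profile best matches the measured profile in the least-squares sense."""
    errors = [np.sum((gaussian_filter1d(ideal.astype(float), s) - measured) ** 2)
              for s in sigmas]
    return sigmas[int(np.argmin(errors))]

# Synthetic example: a 10-pixel-wide dark line, blurred and lightly corrupted by noise.
ideal = np.ones(200)
ideal[95:105] = 0.0
measured = gaussian_filter1d(ideal, 2.5) + 0.01 * np.random.randn(200)
print(estimate_blur_sigma(ideal, measured))  # close to 2.5
```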

Fig. 13

Cross-section image of the USAF 1951 standard test pattern with three black lines at 8 line pairs/mm (left) and 4.49 line pairs/mm (right). Four different viewing angles are used as indicated, and the plots illustrate that there is essentially no change in resolution with viewing direction. (The viewing directions are chosen from the center of the mirror, so only the polar angle varies and the azimuth is 90°. The illumination angle θi is -19.7°, and ϕi is 90°.)

Fig. 14

Illustration of energy distribution. Some rays within the solid angle dν hit the mirror point P and then project to the camera with the effective area ds.

Fig. 15

Camera image of chalk. Note that each pixel corresponds to a different viewing direction (the illumination direction is fixed). For a Lambertian object such as chalk, intensity increases toward the bottom of the mirror, which corresponds to the smallest ρ.

Fig. 16

One vertical line from the center of the camera image of chalk depicted in Fig. 15. For a Lambertian object such as chalk, intensity increases toward the bottom of the mirror, which corresponds to the smallest ρ. The measured and modeled intensities are shown, where the modeled intensity is shown by the dotted curve and is given by Eq. (12).
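
A minimal sketch of this intensity model (assuming, consistent with the relations in the Equations section, that ρ is the distance from the sample point at the focus to the mirror point, so that ρ(z) = (z² + 4F²)/(4F) along the vertical center line, and that the modeled intensity is I = c/ρ²):

```python
import numpy as np

F = 12.7  # mirror focal parameter (mm)

def focus_to_mirror_distance(z):
    """Distance rho from the focus (sample point) to the mirror point at height z
    (x = 0); with y = z**2 / (4 F) - F this simplifies to (z**2 + 4 F**2) / (4 F)."""
    return (z**2 + 4 * F**2) / (4 * F)

# Modeled Lambertian intensity I = c / rho**2 along the vertical center line of
# the camera image; the mirror section spans z = F ... 3F.
z = np.linspace(F, 3 * F, 200)
intensity = 1.0 / focus_to_mirror_distance(z) ** 2   # c = 1, relative units
intensity /= intensity.max()                          # brightest at the bottom (smallest rho)
```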

Fig. 17

Definition of relevant angles for the analysis of specular reflection. The polar angle is θ, and the azimuthal angle is ϕ.

Fig. 18

Curves of constant θ and constant ϕ. The gray circle represents the size of the projected image. Each black ellipse represents curves of constant θ  (θ=5° to 25°, shown from innermost to outermost with step size 5°). The dashed radiating lines are curves of constant ϕ (ϕ=60°, 70°,…, shown increasing clockwise with step size 10°).

Fig. 19

Predicted specularity trajectories as the illuminated point P moves horizontally or vertically. The horizontal curves are the paths of the peak specular reflection when P moves horizontally (along X), while the vertical curves correspond to the path when P moves vertically (along Z).

Fig. 20

Observed camera images for a glossy blue cardboard sample. Each image shows a different illuminated point P. The relevant coordinates of P are denoted by x, z; ψ=0.0025 mm is the stage step size. Top left, x=0, z=2F-1000ψ; top right, x=0, z=2F+1000ψ; bottom left, x=-1619ψ, z=2F; bottom right, x=1619ψ, z=2F.

Fig. 21

Demonstration of simultaneous spatial imaging (capturing image texture) and angular imaging (capturing reflectance as a function of imaging angles). Top row: texture image from a horizontal and vertical standard line pattern (Ronchi ruling, 1 line/mm). The illumination direction is frontal, and the viewing direction is given by θv=17°, ϕv=90°. Bottom row: camera image showing reflectance as a function of viewing direction for a black surface point (left) and a white surface point (right). The illumination direction is frontal, and the expected specularity for a glossy surface is visible.

Fig. 22

Texture images for sample surfaces: (in columns from left to right) canvas, rough plastic, rubber mat, and leather. Each image corresponds to a frontal view (i.e., θv=0). The width is 10.5 mm, and the height is 8.25 mm. The top row corresponds to θi=12.1°. For the middle row, θi=0. For the bottom row, θi=-15.3°. For each image, ϕi=0; i.e., the illuminating ray intersects the mirror in the yz plane.

Fig. 23

Texture images for sample surfaces: (in columns from left to right) canvas, rough plastic, rubber mat, and leather. Each image corresponds to a frontal illumination (i.e., θi=0). The width is 10.5 mm, and the height is 8.25 mm. The top row corresponds to θv=22°. For the middle row, θv=0. For the bottom row, θv=-22°. For each image, ϕv=0; i.e., the point of interest on the mirror is in the yz plane. Note that the brightness of the top row has been manually enhanced so that structure is better visible. The brightness is higher for the bottom row because of the effect illustrated in Fig. 15.

Fig. 24

Images from the camera as the mirror focuses on a skin surface. When the focal point is above the surface, the surrounding texture is discernible. The images from left to right depict the camera view as the focal point approaches the surface. These views can be used to navigate the mirror focus to the desired surface point.

Equations (26)

y + F = \frac{z^2 + x^2}{4F},
\theta_m = \frac{\pi}{2} - \arctan\left[\frac{4F(\delta + F)}{\delta^2}\right]^{1/2}.
z_B = 12.7 = F,
y_B = \frac{z_B^2 + x_B^2}{4F} - F = -\frac{3}{4}F.
\theta_1 = \arctan\frac{3}{4} \approx 36.9^\circ.
z_A = 3 \times 12.7 = 3F,
y_A = \frac{z_A^2 + x_A^2}{4F} - F = \frac{5}{4}F.
\theta_2 = \arctan(5/12) \approx 22.6^\circ.
x_s = -x/p + C_x,
y_s = -(z - 2F)/p + C_y,
\mathrm{NA} = n \sin\frac{\alpha}{2},
R = \frac{\lambda}{2\,\mathrm{NA}},
\alpha(z) = \arctan\frac{z - d/2}{(z - d/2)^2/(4F) - F} - \arctan\frac{z + d/2}{(z + d/2)^2/(4F) - F}.
\frac{\mathrm{d}A}{\mathrm{d}\nu} = \left(\frac{4F^2 + z^2}{4F}\right)^2 M^2,
I = \frac{c_\nu}{\rho^2}\,\mathrm{d}\nu = \frac{c}{\rho^2},
\theta = \arccos\frac{z}{(x^2 + y^2 + z^2)^{1/2}}
\phi = \arccos\frac{x}{(x^2 + y^2)^{1/2}}.
\cos\phi = \frac{x}{(x^2 + y^2)^{1/2}},
\sin\phi = \frac{y}{(x^2 + y^2)^{1/2}},
\sin\theta = \frac{(x^2 + y^2)^{1/2}}{(x^2 + y^2 + z^2)^{1/2}}.
\sin\theta = \frac{(x^2 + y^2)^{1/2}}{(y^2 + 4Fy + 4F^2)^{1/2}} = \frac{(x^2 + y^2)^{1/2}}{|y + 2F|} = \frac{|y|}{|y + 2F|}\left[\left(\frac{x}{y}\right)^2 + 1\right]^{1/2} = \frac{|y|}{|y + 2F|}\left[\left(\frac{\cos\phi}{\sin\phi}\right)^2 + 1\right]^{1/2} = \frac{|y|}{|y + 2F|\,|\sin\phi|}.
\sin\theta\,|\sin\phi| = \frac{|y|}{y + 2F}.
\sin\theta \sin\phi = \frac{y}{y + 2F},
y = \frac{2F}{1/(\sin\theta \sin\phi) - 1},
x = \frac{y}{\tan\phi},
z = [4F(y + F) - x^2]^{1/2}.
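
As a practical illustration of the last four relations, the sketch below maps a desired viewing direction (θ, φ) to the corresponding mirror point and then to image coordinates via the x_s, y_s relations (Python; the pixel scale p and image center (C_x, C_y) are placeholder calibration values, and the mapping assumes sin θ sin φ > 0):

```python
import math

F = 12.7  # mirror focal parameter (mm)

def mirror_point_for_direction(theta, phi):
    """Mirror point (x, y, z) whose reflected ray corresponds to the viewing
    direction (theta, phi), following y = 2F / (1/(sin(theta) sin(phi)) - 1),
    x = y / tan(phi), z = [4F(y + F) - x**2]**0.5."""
    s = math.sin(theta) * math.sin(phi)
    y = 2 * F / (1.0 / s - 1.0)
    x = y / math.tan(phi)
    z = math.sqrt(4 * F * (y + F) - x**2)
    return x, y, z

def image_coords(x, z, p, Cx, Cy):
    """Image coordinates from x_s = -x/p + C_x and y_s = -(z - 2F)/p + C_y."""
    return -x / p + Cx, -(z - 2 * F) / p + Cy

# Example: the pixel that observes the sample at theta = 15 deg, phi = 90 deg.
x, y, z = mirror_point_for_direction(math.radians(15), math.radians(90))
xs, ys = image_coords(x, z, p=0.05, Cx=320, Cy=240)  # placeholder calibration
print((round(x, 2), round(y, 2), round(z, 2)), (round(xs, 1), round(ys, 1)))
```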
