Abstract

Three-dimensional displays have become increasingly common in consumer markets, but the ability to capture three-dimensional images in space-confined environments, without major modifications to current cameras, remains rare. Our goal is to create a simple modification to a conventional camera that allows three-dimensional reconstruction. We require that such an imaging system have coincident imaging and illumination paths, and that any three-dimensional modification to a camera also permit full-resolution 2D image capture.

Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. The astigmatism gives the horizontal and vertical features of the projected pattern two different focus depths, thereby encoding depth. Post-processing designed jointly around the projected pattern and the optical system exploits this differential focus: the distance of an object from the camera at a particular transverse position is correlated with ratios of particular wavelet coefficients.
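
As a rough illustration of the processing chain described above (a sketch, not the authors' implementation), the fragment below subtracts a background frame, applies a single-level 2D wavelet transform, and forms the ratio of horizontal to vertical detail coefficients. The Haar wavelet, the epsilon guard, and the polynomial calibration are illustrative assumptions; the mapping from ratio to distance comes from the flat-target calibration discussed with Figs. 5 and 6 below.

```python
# Minimal sketch: depth ratios from an astigmatically projected pattern.
import numpy as np
import pywt

def depth_ratio_map(pattern_img, background_img, wavelet="haar"):
    # Recover the projected pattern by subtracting the background frame.
    pattern = pattern_img.astype(float) - background_img.astype(float)
    # Single-level 2D DWT; cH and cV are the horizontal and vertical
    # detail bands of the recovered pattern.
    _, (cH, cV, _) = pywt.dwt2(pattern, wavelet)
    # The astigmatic focus makes this ratio vary with distance; the small
    # epsilon avoids division by zero where the pattern is absent.
    return np.abs(cH) / (np.abs(cV) + 1e-9)

def ratio_to_depth(ratio, calib_coeffs):
    # calib_coeffs come from flat-target calibration (see the sketches
    # accompanying Figs. 5 and 6 below).
    return np.polyval(calib_coeffs, ratio)
```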

We describe the construction and calibration of this system and present the images it produces. We also discuss how the design of the projected pattern and the image-processing algorithms are linked.

© 2012 OSA



Figures (9)

Fig. 1

The projected pattern used to obtain the results displayed later in this paper. The reason for this pattern’s particular shape is discussed in the text.

Fig. 2

Figure 2(a) shows the astigmatic pattern projected onto a flat target. Figure 2(b) shows the background image of the flat target. These two images are captured so that one can be subtracted from the other to retrieve the projected pattern, as seen in Fig. 2(c).

Fig. 3

Figure 3(a) shows the horizontal component of the wavelet transform performed on the subtracted pattern seen in Fig. 2(c). Figure 3(b) shows the vertical component of the wavelet transform.

Fig. 4

Ratio of X and Y wavelet coefficients, after segmentation, at different depths from the camera. Red colormap values are closer to the camera, while blue colormap values are farther from the camera. Non-uniformities are discussed in the text.
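
The segmentation step is only named in this excerpt, so the following is a hedged stand-in: it labels connected regions of the recovered pattern with scipy.ndimage and averages the coefficient ratio inside each one, yielding one depth sample per pattern cell. Connected-component labelling is used purely to illustrate; the paper's actual segmentation method is not specified here.

```python
# Stand-in for the segmentation step: one mean ratio per pattern region.
import numpy as np
from scipy import ndimage

def region_depth_samples(ratio_map, pattern, threshold=10.0):
    # Downsample the pattern mask by 2 to match dwt2's half-resolution
    # output (exact for even-sized images with the Haar wavelet).
    mask = (pattern > threshold)[::2, ::2]
    labels, n = ndimage.label(mask)
    idx = np.arange(1, n + 1)  # label 0 is background, skipped
    # Mean coefficient ratio and centroid of each labelled region.
    means = ndimage.mean(ratio_map, labels=labels, index=idx)
    centers = ndimage.center_of_mass(ratio_map, labels, idx)
    return np.array(centers), np.array(means)
```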

Fig. 5

Calibration data set for the astigmatic depth-sensing method. Error bars are plus and minus one standard deviation of the average ratio of the wavelet coefficients in the X and Y directions for an entire image at a single depth.
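
A minimal sketch of how such a calibration curve might be fitted from flat-target captures at known distances. The distances and ratios below are illustrative placeholders, and the quadratic model is an assumption, not the authors' stated choice.

```python
import numpy as np

# Illustrative placeholder data: flat-target distances (inches) and the
# mean wavelet-coefficient ratio measured at each distance.
depths = np.array([24.0, 27.0, 30.0, 33.0, 36.0])
mean_ratios = np.array([0.62, 0.71, 0.80, 0.91, 1.02])

# Fit depth as a function of ratio so measured ratio maps can be
# converted directly into depth maps.
calib_coeffs = np.polyfit(mean_ratios, depths, deg=2)
print(np.polyval(calib_coeffs, 0.75))  # depth estimate for a ratio of 0.75
```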

Fig. 6

Figure (a) shows the depth ratio for each target projected onto the X–Z plane before calibration, where the X–Y plane holds the pixel X and Y coordinates of a measured depth point and the Z axis is the distance from the camera. Figure (b) shows the average depth ratio for all points, with red error bars denoting plus and minus one standard deviation of the points at a particular depth. Figure (c) shows the post-calibration projection of the depth ratios for each flat target. Figure (d) shows the average depth ratio of these calibrated data points, with red error bars denoting plus and minus one standard deviation of all the depth points at a particular depth. Note the significant decrease in the standard deviation of the depth values calculated for a flat surface, from approximately 0.13 before calibration to an average of 0.027 after calibration, as well as the separability of the 3-inch shifts after calibration. This post-calibration standard deviation corresponds to 1.06 inches of depth.
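
Figure 6 shows transverse non-uniformity being removed by calibration. One plausible way to do this, offered here as an assumption about the procedure rather than a confirmed description of it, is to estimate each pixel's systematic gain from the flat-target stack and divide it out:

```python
import numpy as np

def flatten_correction(ratio_stack):
    # ratio_stack: (n_depths, H, W) ratio maps of a flat target captured
    # at each calibration distance.
    frame_means = ratio_stack.mean(axis=(1, 2), keepdims=True)
    # Each pixel's gain relative to the frame mean, averaged over all
    # calibration depths.
    return (ratio_stack / frame_means).mean(axis=0)

def apply_correction(ratio_map, gain):
    # Divide out the fixed transverse non-uniformity before converting
    # ratios to depths.
    return ratio_map / gain
```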

Fig. 7

The reconstructed depth map using the methods outlined in this paper.

Fig. 8

(a) shows the original 2D image captured by our camera. (b) shows the measured depth mask: darker blue colors represent distances close to the camera, while red colors represent distances farther from the camera. Note that cylindrical objects, as well as the more specular cup, are captured correctly with this technique. (c) is a zoomed view of the projected pattern incident upon nearer targets, while (d) is a zoomed view of the pattern incident upon farther targets. Note that the vertical and horizontal line contrast changes as distance from the camera increases.

Fig. 9

The reconstructed stereo image, presented as an anaglyph. The 3D image can be viewed with a pair of red-cyan glasses.
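
A simple way to render such an anaglyph from the 2D capture and the measured depth map is to shift pixels horizontally in proportion to disparity and route the two views into the red and cyan channels. The linear disparity scaling and nearest-pixel shift below are illustrative simplifications, not the paper's stated rendering method.

```python
import numpy as np

def anaglyph(gray_img, depth_map, max_disparity=8):
    # Nearer points receive larger disparity.
    h, w = gray_img.shape
    near, far = depth_map.min(), depth_map.max()
    disp = (max_disparity * (far - depth_map) / max(far - near, 1e-9)).astype(int)

    cols = np.arange(w)
    left = np.empty_like(gray_img)
    right = np.empty_like(gray_img)
    for y in range(h):
        # Resample each row with opposite horizontal shifts per eye.
        left[y] = gray_img[y, np.clip(cols + disp[y], 0, w - 1)]
        right[y] = gray_img[y, np.clip(cols - disp[y], 0, w - 1)]

    # Red channel from the left view, green and blue from the right view,
    # for viewing with red-cyan glasses.
    return np.dstack([left, right, right])
```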

Tables (1)


Table 1 Overview of the benefits and disadvantages of several optical 3D measuring techniques.
