Abstract

In this paper, we propose a high dynamic range 3D imaging method based on light field imaging under structured illumination. Fringe patterns are projected onto a scene and modulated by the scene depth, and the resulting structured light field is captured with a light field recording device. The structured light field carries both ray-direction and phase-encoded depth information, so the scene depth can be estimated from different directions. This multidirectional depth estimation enables effective high dynamic range 3D imaging. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach that determines the mapping coefficients independently for each ray. Experimental results demonstrated that the proposed method achieves high-quality 3D imaging of surfaces with both high and low reflectivity.

© 2016 Optical Society of America
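To make the processing chain summarized in the abstract concrete, the following minimal Python sketch (hypothetical helper names, not the authors' implementation) assumes a standard N-step phase-shifting scheme: it recovers the wrapped phase and fringe modulation from a stack of shifted fringe images and maps a phase difference to depth with the per-ray relation d = m·Δφ/(n + Δφ) given as Eq. (5) below; light field decoding and phase unwrapping are omitted.

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase and fringe modulation from N equally shifted fringe images.

    `images` is an (N, H, W) stack captured under phase shifts 2*pi*n/N.
    Returns the wrapped phase in (-pi, pi] and the modulation amplitude,
    which plays the role of R_b in Eq. (2).
    """
    images = np.asarray(images, dtype=float)
    n = images.shape[0]
    shifts = 2 * np.pi * np.arange(n) / n
    s = np.tensordot(np.sin(shifts), images, axes=1)   # sum_n I_n * sin(delta_n)
    c = np.tensordot(np.cos(shifts), images, axes=1)   # sum_n I_n * cos(delta_n)
    phase = -np.arctan2(s, c)
    modulation = 2.0 * np.hypot(s, c) / n
    return phase, modulation

def depth_from_phase(delta_phi, m_us, n_us):
    """Per-ray phase-difference-to-depth mapping of Eq. (5): d = m*dphi / (n + dphi)."""
    return m_us * delta_phi / (n_us + delta_phi)
```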




Supplementary Material (1)

Visualization 1 (MOV, 2096 KB): A brief presentation of the procedure and result of the proposed method.


Figures (7)

Fig. 1

(a) Image formation of an SLF-3DI system consisting of a projector and a plenoptic camera; (b) the 2D diagram of an SLF-3DI system within a coordinate system.

Fig. 2

Flexible ray-based phase-depth mapping calibration schematic.

Fig. 3

Experimental setup of the SLF-3DI system.

Fig. 4

The measured data and the corresponding characteristic curves of depth versus phase difference: (a) over a measured range of 100 mm; (b) over a measured range of 40 mm.

Fig. 5

3D reconstruction of a plaster model from eight directions.

Fig. 6

(a) The focused view under fringe projection; (b) the multidirectional modulation intensity in the SLF and the detailed segments covered by two microlenses; (c) the modulation-intensity image in the focused view; (d) the maximum modulation-intensity image in the SLF; (e-f) the statistical histograms of (c) and (d), respectively; (g-h) the modulation masks of (c) and (d), respectively.

Fig. 7

Scene reconstruction: (a-b) depth maps from two specific directions; (c) depth map via maximum modulation selection; (d) the corresponding 3D view of (c).
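The maximum modulation selection used for Fig. 7(c) can be pictured as a per-pixel selection rule. The sketch below is only an illustration under that reading (hypothetical function name; the `min_modulation` mask threshold is an added assumption, not from the paper): each pixel keeps the depth estimated from whichever direction exhibits the strongest fringe modulation.

```python
import numpy as np

def fuse_by_max_modulation(depth_maps, modulation_maps, min_modulation=0.0):
    """Per-pixel fusion of multidirectional depth maps.

    depth_maps, modulation_maps: (K, H, W) arrays, one slice per viewing
    direction extracted from the structured light field. Each pixel keeps
    the depth from the direction with the largest fringe modulation; pixels
    whose best modulation is below `min_modulation` are masked with NaN.
    """
    depth_maps = np.asarray(depth_maps, dtype=float)
    modulation_maps = np.asarray(modulation_maps, dtype=float)
    best = np.argmax(modulation_maps, axis=0)        # (H, W) winning direction index
    rows, cols = np.indices(best.shape)
    fused = depth_maps[best, rows, cols]
    valid = modulation_maps[best, rows, cols] >= min_modulation
    return np.where(valid, fused, np.nan)
```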

Tables (1)


Table 1 Fitting Coefficients, MAX (mm) and RMS (mm) of Calibration Results

Equations (15)


(1) $I(X)=a+b\cos(2\pi f X)$
(2) $L(u,s)=R(u,s)\,(a+b\cos\phi)=R_a(u,s)+R_b(u,s)\cos\phi$
(3) $\Delta\phi=\phi_{\mathrm{obj}}-\phi_{\mathrm{ref}}=2\pi f\,(X_E-X_A)$
(4) $d(u,s;\Delta\phi)=\dfrac{Z_P\,\Delta\phi}{2\pi f\left[\dfrac{Z_C(s)-Z_P}{Z_C(s)-Z_B(u)}\bigl(X_B(u)-X_C(s)\bigr)+X_C(s)-X_P\right]+\Delta\phi}$
(5) $d_{us}(\Delta\phi)=\dfrac{m_{us}\,\Delta\phi}{n_{us}+\Delta\phi},\quad m_{us}=Z_P,\quad n_{us}=2\pi f\left[\dfrac{Z_C-Z_P}{Z_C-Z_B}(X_B-X_C)+X_C-X_P\right]$
(6) $\arg\min_{\tau}\sum_i\bigl(d_{i|us}\,n_{us}+d_{i|us}\,\Delta\phi_{i|us}-m_{us}\,\Delta\phi_{i|us}\bigr)^2,\quad \tau=(m_{us},n_{us})$
(7) $\dfrac{\mathrm{d}}{\mathrm{d}\Delta\phi}\bigl(d_{us}\bigr)=\dfrac{m_{us}\,n_{us}}{(n_{us}+\Delta\phi)^2}$
(8) $\dfrac{\mathrm{d}}{\mathrm{d}\Delta\phi}\bigl(d_{us}\bigr)\approx\dfrac{m_{us}\,n_{us}}{n_{us}^{2}}=\dfrac{m_{us}}{n_{us}}$
(9) $d_{us}(\Delta\phi)\approx\dfrac{m_{us}}{n_{us}}\,\Delta\phi=k_{us}\,\Delta\phi$
(10) $\dfrac{X_B-X}{X_B-X_C}=\dfrac{Z_B-Z}{Z_B-Z_C}$
(11) $X_A=\dfrac{Z_B X_C-Z_C X_B}{Z_B-Z_C}$
(12) $\dfrac{X_P-X}{X_P-X_D}=\dfrac{Z_P-Z}{Z_P-d}$
(13) $X_E=\dfrac{Z_P X_D-d\,X_P}{Z_P-d}$
(14) $X_D=\dfrac{X_B-X_C}{Z_B-Z_C}\,d+\dfrac{Z_B X_C-Z_C X_B}{Z_B-Z_C}$
(15) $d=\dfrac{Z_P\,\Delta\phi}{2\pi f\left[\dfrac{Z_C-Z_P}{Z_C-Z_B}(X_B-X_C)+X_C-X_P\right]+\Delta\phi}$
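
A minimal sketch of the ray-wise calibration implied by Eqs. (5) and (6) follows (hypothetical helper names; it assumes the calibration depths d_i at which the phase differences Δφ_i were recorded are known). The linearized residual of Eq. (6) is solved by ordinary least squares for (m_us, n_us), and the MAX and RMS depth residuals of the kind listed in Table 1 are then evaluated from the fitted mapping.

```python
import numpy as np

def calibrate_ray(depths, delta_phis):
    """Least-squares estimate of the per-ray coefficients (m_us, n_us) of Eq. (5).

    Minimizes the linearized residual of Eq. (6),
        d_i * n_us + d_i * dphi_i - m_us * dphi_i = 0,
    which is linear in the unknowns (m_us, n_us).
    """
    d = np.asarray(depths, dtype=float)
    dphi = np.asarray(delta_phis, dtype=float)
    A = np.column_stack((dphi, -d))          # unknown vector is [m_us, n_us]
    b = d * dphi
    (m_us, n_us), *_ = np.linalg.lstsq(A, b, rcond=None)
    return m_us, n_us

def calibration_errors(depths, delta_phis, m_us, n_us):
    """MAX and RMS depth residuals of the fitted mapping, as reported in Table 1."""
    d = np.asarray(depths, dtype=float)
    dphi = np.asarray(delta_phis, dtype=float)
    d_hat = m_us * dphi / (n_us + dphi)      # Eq. (5)
    err = d_hat - d
    return np.max(np.abs(err)), np.sqrt(np.mean(err ** 2))
```

Over a shallow measurement range the mapping is nearly linear, so the fitted ratio m_us/n_us corresponds to the slope k_us of the linear approximation in Eq. (9).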
