Abstract

State-of-the-art 3D shape measurement systems have a rather shallow working volume due to the limited depth of field (DOF) of conventional lenses. In this paper, we propose to use an electrically tunable lens (ETL) to substantially enlarge the DOF. Specifically, we capture always-in-focus phase-shifted fringe patterns by precisely synchronizing the tunable lens attached to the camera with the image acquisition and the pattern projection; we develop a phase unwrapping framework that fully utilizes the geometric constraint imposed by the camera focal length setting; and we pre-calibrate the system under different focal distances to reconstruct the 3D shape from the unwrapped phase map. To validate the proposed idea, we developed a prototype system that can perform high-quality measurement over a depth range of approximately 1,000 mm (400 mm – 1,400 mm) with a measurement error of 0.05%. Furthermore, we demonstrated that the technique can be used for real-time 3D shape measurement by experimentally measuring moving objects.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement




Supplementary Material (7)

» Visualization 1
» Visualization 2
» Visualization 3
» Visualization 4
» Visualization 5
» Visualization 6
» Visualization 7



Figures (8)

Fig. 1. Schematic diagram of the proposed large-DOF 3D shape measurement system.
Fig. 2. Timing chart of the proposed large-DOF 3D shape measurement system. Here $f_c^i$ represents the $i^{th}$ $(1\leq i\leq M)$ focal distance, $t_{i-1}^{i}$ the ETL settling time when changing from focal distance $f_c^{i-1}$ to $f_c^i$, $t_c^{exp}$ the exposure time for both the camera and the projector, $T_c$ the period of the trigger signal, $L_i$ the number of captured images at focal distance $i$, and $T$ the total 3D reconstruction time for the entire depth range.
Fig. 3. Computational framework of our proposed phase unwrapping algorithm.
Fig. 4. Twelve repeated sphere measurements. (a) and (d) Sphere center for the sphere at $z_0^1 = 455$ mm ($\sigma_{x_0}$ = 0.043 mm, $\sigma_{y_0}$ = 0.033 mm, $\sigma_{z_0}$ = 0.166 mm, and $\sigma_{r_0}$ = 0.099 mm); (b) and (e) sphere center for the sphere at $z_0^2 = 847.5$ mm ($\sigma_{x_0}$ = 0.071 mm, $\sigma_{y_0}$ = 0.055 mm, $\sigma_{z_0}$ = 0.392 mm, and $\sigma_{r_0}$ = 0.216 mm); (c) and (f) sphere center for the sphere at $z_0^3 = 1378$ mm ($\sigma_{x_0}$ = 0.102 mm, $\sigma_{y_0}$ = 0.207 mm, $\sigma_{z_0}$ = 0.328 mm, and $\sigma_{r_0}$ = 0.202 mm).
Fig. 5. Dynamic properties of the liquid lens. (a) Photograph of the projected binary pattern taken at $f_c^1$ and the selected cross section, $\sigma$ = 0.456; (b) photograph of the projected binary pattern taken at $f_c^2$ and the selected cross section, $\sigma$ = 0.657; (c) photograph of the projected binary pattern taken at $f_c^3$ and the selected cross section, $\sigma$ = 0.723; (d) dynamic response curve of $\sigma$ when the liquid lens switched from (a) to (b); the target axial position was reached after 15 ms (indicated by black ticks; associated Visualization 1); (e) dynamic response curve of $\sigma$ when the liquid lens switched from (b) to (c); the target axial position was reached after 8 ms (indicated by black ticks; associated Visualization 2); (f) dynamic response curve of $\sigma$ when the liquid lens switched from (c) to (a); the target axial position was reached after 9 ms (indicated by black ticks; associated Visualization 3).
Fig. 6. Photographs of the scene with different focal settings. (a) Camera focal distance at $f_c^1$ = 450 mm; (b)–(d) close-up views of each individual statue for the focal setting used in (a); (e) camera focal distance at $f_c^2$ = 800 mm; (f)–(h) close-up views of each individual statue for the focal setting used in (e); (i) camera focal distance at $f_c^3$ = 1300 mm; (j)–(l) close-up views of each individual statue for the focal setting used in (i). The Matlab surf() function was used, and the color represents depth, with red being closer and blue being farther away. The corresponding depth range is [1220, 1450] mm for the second-column images, [820, 890] mm for the third-column images, and [470, 510] mm for the fourth-column images.
Fig. 7. Measurement results of three spheres at different locations. The first row shows the results for the first sphere (diameter = 40 mm at 450 mm) as the camera changes its focal distance from $f_c^1$ to $f_c^3$; the second row shows the corresponding results for the second sphere (diameter = 80 mm at 800 mm); and the third row shows the results for the third sphere (diameter = 200 mm at 1300 mm). The first, third, and fifth columns show the 3D reconstructions, and the second, fourth, and sixth columns show the corresponding error maps of the 3D results in the preceding columns. As in Fig. 6, the color represents depth. The corresponding depth ranges are [415, 460], [790, 870], and [1150, 1400] mm for the first-, second-, and third-row images, respectively.
Fig. 8. Measurement results of moving objects. (a) 3D result when the camera is focused at position 1 (associated Visualization 4); (b) 3D result when the camera is focused at position 2 (associated Visualization 5); (c) 3D result when the camera is focused at position 3 (associated Visualization 6); (d) 3D result obtained with our proposed method (associated Visualization 7).

Tables (1)

Table 1. Measurement mean error and RMSE of the spheres shown in Fig. 7 (mm)
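Table 1 reports the mean error and RMSE of the reconstructed points against ideal spheres. As a minimal sketch of how such statistics can be computed, assuming the reference sphere's center and radius are known or fitted (the helper name and interface are illustrative, not the authors' evaluation code):

```python
import numpy as np

def sphere_error_stats(points, center, radius):
    """Mean error and RMSE of measured 3D points against an ideal sphere.
    points: (N, 3) array of reconstructed coordinates; center, radius:
    the reference sphere. Hypothetical evaluation helper, not from the paper."""
    d = np.linalg.norm(points - np.asarray(center, dtype=float), axis=1)
    err = d - radius                          # signed radial deviation per point
    return err.mean(), np.sqrt(np.mean(err ** 2))
```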

Equations (14)


$$I_n(x, y) = A(x, y) + B(x, y)\cos\left[\phi(x, y) - \delta_n(x, y)\right],\tag{1}$$
$$\phi(x, y) = \tan^{-1}\left[\frac{\sum_{n=1}^{N} I_n(x, y)\sin\delta_n}{\sum_{n=1}^{N} I_n(x, y)\cos\delta_n}\right].\tag{2}$$
$$\Phi(x, y) = \phi(x, y) + k(x, y)\times 2\pi.\tag{3}$$
$$A(x, y) = \frac{\sum_{n=1}^{N} I_n(x, y)}{N}.\tag{4}$$
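Equations (1)–(4) are the standard N-step phase-shifting relations: Eq. (1) models the captured fringe images, Eq. (2) recovers the wrapped phase, and Eq. (4) the average intensity. A minimal NumPy sketch, assuming equally spaced phase shifts $\delta_n = 2\pi n/N$ (an assumption; the equations leave $\delta_n$ generic):

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase (Eq. (2)) and average intensity (Eq. (4)) from N
    phase-shifted fringe images stacked as an (N, H, W) array.
    Assumes equally spaced shifts delta_n = 2*pi*n/N; illustrative
    sketch, not the authors' implementation."""
    I = np.asarray(images, dtype=np.float64)
    N = I.shape[0]
    delta = 2.0 * np.pi * np.arange(N) / N
    num = np.tensordot(np.sin(delta), I, axes=1)   # sum_n I_n sin(delta_n)
    den = np.tensordot(np.cos(delta), I, axes=1)   # sum_n I_n cos(delta_n)
    phi = np.arctan2(num, den)   # wrapped phase in (-pi, pi], Eq. (2)
    A = I.mean(axis=0)           # average intensity, Eq. (4)
    return phi, A
```

The absolute phase of Eq. (3) then differs from this wrapped phase by an integer fringe order $k(x, y)$ times $2\pi$, which the geometric-constraint unwrapping below determines.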
$$F = \frac{1}{T} = \begin{cases} \dfrac{1}{L_1 \times T_c}, & M = 1\\[2ex] \dfrac{1}{t_{M}^{1} + L_1 \times T_c + \sum_{i=2}^{M}\left(t_{i-1}^{i} + L_i \times T_c\right)}, & M \geq 2,\end{cases}\tag{5}$$
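Since Eq. (5) only combines settling times and trigger periods, it maps directly onto a small helper. A sketch, assuming the settling times are supplied in the order $[t_M^1, t_1^2, \ldots, t_{M-1}^M]$ implied by the timing chart of Fig. 2 (the argument layout is my assumption):

```python
def measurement_rate(L, T_c, t_settle=None):
    """3D measurement rate F = 1/T per Eq. (5). L[i] is the number of
    images captured at the (i+1)-th focal distance, T_c the trigger
    period in seconds, and t_settle the ETL settling times
    [t_M^1, t_1^2, ..., t_{M-1}^M] (only needed when M >= 2).
    Illustrative helper, not vendor or author code."""
    M = len(L)
    if M == 1:
        return 1.0 / (L[0] * T_c)
    T = t_settle[0] + L[0] * T_c          # settle back to f_c^1, first burst
    for i in range(1, M):
        T += t_settle[i] + L[i] * T_c     # settle to the next focal distance
    return 1.0 / T
```

With the settling times measured in Fig. 5 this would read, e.g., measurement_rate([L1, L2, L3], T_c, [0.009, 0.015, 0.008]), where 9 ms, 15 ms, and 8 ms are the reported (c)→(a), (a)→(b), and (b)→(c) transitions.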
$$\phi_{eq}^{i} = \begin{cases} \left(\phi^{i} - \phi^{i+1}\right) \bmod 2\pi, & i = 1\\ \left(\phi^{i} - \phi^{i-1}\right) \bmod 2\pi, & 1 < i \leq M,\end{cases}\tag{6}$$
$$\lambda_{eq}^{i} = \begin{cases} \left|\dfrac{\lambda^{i}\lambda^{i+1}}{\lambda^{i} - \lambda^{i+1}}\right|, & i = 1\\[2ex] \left|\dfrac{\lambda^{i}\lambda^{i-1}}{\lambda^{i} - \lambda^{i-1}}\right|, & 1 < i \leq M.\end{cases}\tag{7}$$
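Equations (6)–(7) are the usual heterodyne (beat) construction: differencing two wrapped phase maps with nearby fringe wavelengths yields a wrapped phase of a much longer equivalent wavelength. A sketch under that reading (0-based indexing, $M \geq 2$; names are illustrative):

```python
import numpy as np

def equivalent_phase(phi, lam):
    """Equivalent wrapped phase (Eq. (6)) and equivalent wavelength
    (Eq. (7)) for each focal setting. phi: list of M wrapped phase
    maps; lam: the matching fringe wavelengths. Sketch only."""
    M = len(phi)
    phi_eq, lam_eq = [], []
    for i in range(M):
        j = 1 if i == 0 else i - 1   # neighbor pairing per Eqs. (6)-(7)
        phi_eq.append(np.mod(phi[i] - phi[j], 2.0 * np.pi))
        lam_eq.append(abs(lam[i] * lam[j] / (lam[i] - lam[j])))
    return phi_eq, lam_eq
```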
$$\Phi_0(x, y) = g\left(\mathbf{P}^c, \mathbf{P}^p, \lambda_0, z_0\right),\tag{8}$$
$$z^{i} = h\left(f_c^{i}\right),\tag{9}$$
$$\Phi_{\min}^{i}(x, y) = g\left(\mathbf{P}^c, \mathbf{P}^p, \lambda_{eq}^{i}, h_{\min}\left(f_c^{i}\right)\right),\tag{10}$$
$$K_{eq}^{i}(x, y) = \operatorname{ceil}\left[\frac{\Phi_{\min}^{i}(x, y) - \phi_{eq}^{i}(x, y)}{2\pi}\right],\tag{11}$$
$$\Phi_{eq}^{i}(x, y) = \phi_{eq}^{i}(x, y) + K_{eq}^{i}(x, y)\times 2\pi.\tag{12}$$
$$\Phi^{i}(x, y) = \phi^{i}(x, y) + K^{i}(x, y)\times 2\pi,\tag{13}$$
$$K^{i}(x, y) = \operatorname{Round}\left[\frac{\Phi_{eq}^{i}(x, y)\times \lambda_{eq}^{i}/\lambda^{i} - \phi^{i}(x, y)}{2\pi}\right].\tag{14}$$
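Equations (11)–(14) chain into a per-pixel, two-step unwrapping: the equivalent phase is first unwrapped against the minimum phase map $\Phi_{\min}^{i}$ obtained from the calibrated geometric constraint (Eqs. (11)–(12)), and the result then fixes the fringe order of the high-frequency phase (Eqs. (13)–(14)). A compact sketch, assuming $\Phi_{\min}^{i}$ has already been generated via Eq. (10):

```python
import numpy as np

def unwrap_geometric(phi, phi_eq, Phi_min, lam, lam_eq):
    """Absolute phase from Eqs. (11)-(14) for one focal setting.
    phi, phi_eq, Phi_min: (H, W) wrapped high-frequency phase,
    equivalent phase, and minimum phase map; lam, lam_eq: the
    corresponding fringe wavelengths. Illustrative sketch."""
    two_pi = 2.0 * np.pi
    K_eq = np.ceil((Phi_min - phi_eq) / two_pi)            # Eq. (11)
    Phi_eq = phi_eq + K_eq * two_pi                        # Eq. (12)
    K = np.round((Phi_eq * lam_eq / lam - phi) / two_pi)   # Eq. (14)
    return phi + K * two_pi                                # Eq. (13)
```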