Abstract

We propose a passive depth estimation method using a focus tunable lens. The proposed system consists of a lens group, a focus tunable lens, and a charge-coupled device. The target object is imaged by the lens group, and its depth is measured by changing the focal length of the focus tunable lens. The method measures depth information in the image domain rather than in the object domain. An autofocusing algorithm finds the best focus position of the target object from a focus value calculated with the Sobel operator. We believe the proposed method is applicable to depth measurement systems because it offers a simple configuration without any active light source and can operate in real time. An experiment, performed for comparison with theoretical calculations, confirms the feasibility of the proposed method.
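As a rough illustration of the focal-sweep idea summarized above (a sketch, not the authors' implementation), the following Python outline drives a tunable lens through a set of focal lengths, scores each captured frame with a Sobel-based focus value, and returns the sharpest setting; the helpers set_focal_length() and grab_frame() are hypothetical stand-ins for the lens-driver and camera APIs.

```python
import numpy as np
from scipy import ndimage

def focus_value(gray):
    """Sobel-based focus measure: mean gradient magnitude of the frame."""
    gx = ndimage.sobel(gray, axis=1) / 8.0   # horizontal gradient (1/8-normalized)
    gy = ndimage.sobel(gray, axis=0) / 8.0   # vertical gradient
    return float(np.mean(np.hypot(gx, gy)))

def best_focus(focal_lengths_mm, set_focal_length, grab_frame):
    """Sweep the focus tunable lens and return the focal length that gives the
    sharpest image; the corresponding depth follows from the system's lens equations."""
    scores = []
    for f in focal_lengths_mm:
        set_focal_length(f)                  # drive the focus tunable lens
        frame = grab_frame().astype(float)   # capture a grayscale frame
        scores.append(focus_value(frame))
    return focal_lengths_mm[int(np.argmax(scores))], scores
```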

© 2019 Optical Society of America






Figures (9)

Fig. 1. Schematic of the proposed method.
Fig. 2. Depth of focus and depth of field in the optical system.
Fig. 3. Simulation setup of the conventional DFF and the proposed method. A fixed focal length lens is used to form the object image on the image sensor plane. A detailed description of the simulation parameters is given in Table 1.
Fig. 4. Comparison of depth of field values modeled with the conventional DFF method and the proposed method versus the distance to the object. (a) Simulation results for a lens group 2 focal length of 100 mm. (b) Simulation results for a lens group 2 focal length of 200 mm.
Fig. 5. Principle of the autofocusing algorithm with a contrast detection method.
Fig. 6. Workflow of the proposed method.
Fig. 7. Experimental setup of the proposed method. (a) Proposed system in real mode, where the image domain is located between lens group 1 and lens group 2. (b) Proposed system in virtual mode, where the image domain lies outside the proposed system. (c) Overall experimental setup. (d) Resolution target used as the object.
Fig. 8. Simulation and experimental results for the depth accuracy of the proposed method. (a) Depth of focus at an image distance of 59.7 mm in Zemax. (b) Step size in the working distance from the focus tunable lens. (c) Comparison of the focus value versus step size using the motorized stage and the focus tunable lens.
Fig. 9. Experimental results of the proposed method. RMS is the root mean square. (a) Focal length of the convex lens is 62.67 mm, and the image distance is 70 mm. (b) Focal length of the convex lens is 124.54 mm, and the image distance is 140 mm. (c) Experimental results in virtual mode. Focal length of the convex lens is 74.72 mm, and the image distance is 70 mm.

Tables (2)

Table 1. Simulation Setup for Comparison between Conventional DFF and the Proposed Method

Table 2. Specifications for the Simulation and Experimental Setup

Equations (5)

\mathrm{DOF} = \frac{2 u^2 N C}{f^2},
\mathrm{DOF}_{\mathrm{Front/Back}} = \frac{u^2 N C}{f^2 \pm u N C}, \qquad \mathrm{DOF}_{\mathrm{Total}} = \mathrm{DOF}_{\mathrm{Front}} + \mathrm{DOF}_{\mathrm{Back}},
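For illustration, using the depth-of-field formulas above with hypothetical values (standard notation assumed: u the object distance, N the f-number, C the circle of confusion, f the focal length), say u = 500 mm, N = 2, C = 0.01 mm, and f = 100 mm:

\mathrm{DOF} = \frac{2 (500)^2 (2)(0.01)}{(100)^2}\ \mathrm{mm} = 1\ \mathrm{mm}, \qquad \mathrm{DOF}_{\mathrm{Front}} = \frac{(500)^2 (2)(0.01)}{(100)^2 + (500)(2)(0.01)}\ \mathrm{mm} \approx 0.4995\ \mathrm{mm}, \qquad \mathrm{DOF}_{\mathrm{Back}} \approx 0.5005\ \mathrm{mm},

so the front and back contributions sum to the approximately 1 mm total given by the first formula.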
\mathrm{PSF}(x_{\mathrm{img}}, y_{\mathrm{img}}) = \mathrm{FFT}\{\, p(x_{\mathrm{pupil}}, y_{\mathrm{pupil}}) \times \exp\!\big(i k W(x_{\mathrm{pupil}}, y_{\mathrm{pupil}})\big) \,\},
dz_{\mathrm{far}} = \operatorname{argmax}\big(\mathrm{MTF}_{0.5}(dz)\big), \qquad dz_{\mathrm{near}} = \operatorname{argmin}\big(\mathrm{MTF}_{0.5}(dz)\big), \qquad \mathrm{dof} = |dz_{\mathrm{near}}| + |dz_{\mathrm{far}}|,
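The two relations above can be sketched numerically as follows (an assumed reading of the analysis, not the authors' Zemax workflow): build a defocused pupil function, Fourier-transform it to a PSF, derive the MTF, and take the depth of focus as the defocus range over which the MTF at a chosen reference frequency stays at or above 0.5. The grid size, wavelength, and reference-frequency index below are illustrative assumptions.

```python
import numpy as np

N = 256                                  # pupil grid size (assumed)
wavelength = 0.55e-3                     # mm, green light (assumed)
k = 2 * np.pi / wavelength

yy, xx = np.mgrid[-1:1:1j * N, -1:1:1j * N]
rho2 = xx**2 + yy**2
pupil = (rho2 <= 1.0).astype(float)      # circular aperture p(x_pupil, y_pupil)

def mtf_at(defocus_waves, f_index=8):
    """MTF at one reference frequency index for a given defocus (in waves)."""
    W = defocus_waves * wavelength * rho2             # defocus wavefront W(x, y)
    field = pupil * np.exp(1j * k * W)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    otf = np.fft.fft2(psf)
    mtf = np.abs(otf) / np.abs(otf[0, 0])             # normalize to the DC term
    return mtf[0, f_index]

dz = np.linspace(-2.0, 2.0, 81)                       # defocus sweep (waves)
in_focus = dz[np.array([mtf_at(d) for d in dz]) >= 0.5]
dz_near, dz_far = in_focus.min(), in_focus.max()
dof = abs(dz_near) + abs(dz_far)                      # depth of focus estimate
```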
S_x = \frac{1}{8}\begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad S_y = \frac{1}{8}\begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}.
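For reference, a minimal Python version of a focus value built from exactly the kernels above (an illustrative sketch, not the authors' code):

```python
import numpy as np
from scipy.signal import convolve2d

SX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float) / 8.0
SY = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]], dtype=float) / 8.0

def sobel_focus_value(gray):
    """Focus value: total gradient magnitude; larger means better focused."""
    gx = convolve2d(gray, SX, mode="same", boundary="symm")
    gy = convolve2d(gray, SY, mode="same", boundary="symm")
    return float(np.sum(np.hypot(gx, gy)))
```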
