Abstract

This paper presents a novel technique to achieve autofocusing for a three-dimensional (3D) profilometry system with dual projectors. The proposed system uses a camera equipped with an electrically focus-tunable lens (ETL), which allows the camera's focal plane to be changed dynamically so that the camera can focus on the object. The camera captures the fringe patterns projected by each projector to establish corresponding points between the two projectors, and the two pre-calibrated projectors form the triangulation for 3D reconstruction. We pre-calibrate the relationship between depth and the driving current used for each focal plane, perform a 3D shape measurement at an unknown focus level, and calculate the desired current value from that initial 3D result. We developed a prototype system that can automatically focus on an object positioned between 450 mm and 850 mm.
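To make the depth-to-current mapping concrete, here is a minimal sketch, assuming hypothetical calibration coefficients for the quadratic mapping $i = a_0 + a_1 Z + a_2 Z^2$ given later on this page (Eq. (10)); the coefficient values and depths below are illustrative placeholders, not data from the prototype.

```python
import numpy as np

# Hypothetical coefficients for i = a0 + a1*Z + a2*Z^2 (Eq. (10));
# a real system obtains these from the pre-calibration procedure.
A0, A1, A2 = -150.0, 0.20, -5.0e-5   # current i in mA, depth Z in mm

def depth_to_current(z_mm: float) -> float:
    """Map a depth estimate (mm) to the ETL driving current (mA)."""
    return A0 + A1 * z_mm + A2 * z_mm**2

# Coarse 3D result -> representative depth -> desired focus current.
z_coarse = np.array([648.2, 651.7, 649.9])   # made-up depths from an initial measurement
print(f"desired ETL current: {depth_to_current(float(z_coarse.mean())):.2f} mA")
```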

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


  1. S. Zhang, “Rapid and automatic optimal exposure control for digital fringe projection technique,” Opt. Lasers Eng. 128, 106029 (2020).
    [Crossref]
  2. S. Zhang, “High-speed 3D shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018).
    [Crossref]
  3. B. Li and S. Zhang, “Microscopic structured light 3D profilometry: Binary defocusing technique vs. sinusoidal fringe projection,” Opt. Lasers Eng. 96, 117–123 (2017).
    [Crossref]
  4. L. Zhang and S. Nayar, “Projection defocus analysis for scene capture and image display,” in ACM SIGGRAPH 2006 Papers (2006), pp. 907–915.
  5. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recogn. 43(8), 2666–2680 (2010).
    [Crossref]
  6. Y. Zhang, Z. Xiong, P. Cong, and F. Wu, “Robust depth sensing with adaptive structured light illumination,” J. Vis. Commun. Image Represent. 25(4), 649–658 (2014).
    [Crossref]
  7. S. Achar and S. G. Narasimhan, “Multi focus structured light for recovering scene shape and global illumination,” in European Conference on Computer Vision (Springer, 2014), pp. 205–219.
  8. A. Sauceda and J. Ojeda-Castañeda, “High focal depth with fractional-power wave fronts,” Opt. Lett. 29(6), 560–562 (2004).
    [Crossref]
  9. V. N. Le, S. Chen, and Z. Fan, “Optimized asymmetrical tangent phase mask to obtain defocus invariant modulation transfer function in incoherent imaging systems,” Opt. Lett. 39(7), 2171–2174 (2014).
    [Crossref]
  10. Y. Wu, L. Dong, Y. Zhao, M. Liu, X. Chu, W. Jia, X. Guo, and Y. Feng, “Analysis of wavefront coding imaging with cubic phase mask decenter and tilt,” Appl. Opt. 55(25), 7009–7017 (2016).
    [Crossref]
  11. J. R. Alonso, A. Fernández, G. A. Ayubi, and J. A. Ferrari, “All-in-focus image reconstruction under severe defocus,” Opt. Lett. 40(8), 1671–1674 (2015).
    [Crossref]
  12. O.-J. Kwon, S. Choi, D. Jang, and H.-S. Pang, “All-in-focus imaging using average filter-based relative focus measure,” Digit. Signal Process. 60, 200–210 (2017).
    [Crossref]
  13. V. Aslantas and D. Pham, “Depth from automatic defocusing,” Opt. Express 15(3), 1011–1023 (2007).
    [Crossref]
  14. S. Murata and M. Kawamura, “Particle depth measurement based on depth-from-defocus,” Opt. Laser Technol. 31(1), 95–102 (1999).
    [Crossref]
  15. J. M. Jabbour, B. H. Malik, C. Olsovsky, R. Cuenca, S. Cheng, J. A. Jo, Y.-S. L. Cheng, J. M. Wright, and K. C. Maitland, “Optical axial scanning in confocal microscopy using an electrically tunable lens,” Biomed. Opt. Express 5(2), 645–652 (2014).
    [Crossref]
  16. C. Zuo, Q. Chen, W. Qu, and A. Asundi, “High-speed transport-of-intensity phase microscopy with an electrically tunable lens,” Opt. Express 21(20), 24060–24075 (2013).
    [Crossref]
  17. H. Li, J. Peng, F. Pan, Y. Wu, Y. Zhang, and X. Xie, “Focal stack camera in all-in-focus imaging via an electrically tunable liquid crystal lens doped with multi-walled carbon nanotubes,” Opt. Express 26(10), 12441–12454 (2018).
    [Crossref]
  18. X. Shen and B. Javidi, “Large depth of focus dynamic micro integral imaging for optical see-through augmented reality display using a focus-tunable lens,” Appl. Opt. 57(7), B184–B189 (2018).
    [Crossref]
  19. X. Hu, G. Wang, Y. Zhang, H. Yang, and S. Zhang, “Large depth-of-field 3D shape measurement using an electrically tunable lens,” Opt. Express 27(21), 29697–29709 (2019).
    [Crossref]
  20. X. Hu, G. Wang, J.-S. Hyun, Y. Zhang, H. Yang, and S. Zhang, “Autofocusing method for high-resolution three-dimensional profilometry,” Opt. Lett. 45(2), 375–378 (2020).
    [Crossref]
  21. D. Malacara, Optical Shop Testing, vol. 59 (John Wiley & Sons, 2007).
  22. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured-light system with an out-of-focus projector,” Appl. Opt. 53(16), 3415–3426 (2014).
    [Crossref]
  23. G. Rao, L. Song, S. Zhang, X. Yang, K. Chen, and J. Xu, “Depth-driven variable-frequency sinusoidal fringe pattern for accuracy improvement in fringe projection profilometry,” Opt. Express 26(16), 19986–20008 (2018).
    [Crossref]
  24. F. Crete, T. Dolmiere, P. Ladret, and M. Nicolas, “The blur effect: perception and estimation with a new no-reference perceptual blur metric,” in Human Vision and Electronic Imaging XII, vol. 6492 (International Society for Optics and Photonics, 2007), p. 64920I.



Figures (9)

Fig. 1. Schematic diagram of the proposed calibration.
Fig. 2. Depth-of-field (DOF) limitation of the projector when the camera focus can be adjusted. A square binary pattern is projected onto a white board and captured at a distance of approximately (a) 450 mm, (b) 650 mm, and (c) 850 mm from the projector.
Fig. 3. DOF limitation of the camera. Captured images of a white board at a distance of approximately (a) 450 mm, (b) 650 mm, and (c) 850 mm from the camera.
Fig. 4. Relationship between the focal plane and the driving current. Dots are the measured data points, and the blue smooth curve is the fitted polynomial function.
Fig. 5. Framework of our autofocusing method (a minimal sketch of this framework follows the figure list).
Fig. 6. Measurement results of spheres at different locations. The first row shows the sphere with a diameter of 40 mm positioned at $Z = 450$ mm, the second row the sphere with a diameter of 80 mm positioned at $Z = 650$ mm, and the third row the sphere with a diameter of 200 mm positioned at $Z = 850$ mm. The first, third, and fifth columns show 3D results for driving currents $i_1$, $i_2$, and $i_3$, respectively; the second, fourth, and sixth columns show the corresponding error maps.
Fig. 7. One cross section of each measured sphere under different defocusing levels: (a) the first sphere at $i_1$, $i_2$, $i_3$; (b) the second sphere at $i_1$, $i_2$, $i_3$; and (c) the third sphere at $i_1$, $i_2$, $i_3$.
Fig. 8. Experimental results of automatically focusing on a smaller statue within the working range. (a) Photograph of the object with the initial setting; (b) 3D result with the initial setting of -80 mA; (c) photograph of the object with the focused setting of -38.69 mA; (d) 3D result with the focused setting; (e)-(h) close-up views of the results shown in (a)-(d).
Fig. 9. Experimental results of automatically focusing on a larger statue within the working range. (a) Photograph of the object with the initial setting; (b) 3D result with the initial setting of -90 mA; (c) photograph of the object with the focused setting of -46.09 mA; (d) 3D result with the focused setting; (e)-(h) close-up views of the results shown in (a)-(d).
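As referenced in the Fig. 5 caption, the following minimal sketch walks the framework end to end: measure once at an arbitrary focus level, map the mean depth to the desired ETL current through Eq. (10), refocus, and measure again. The helper names `set_etl_current` and `capture_3d` are hypothetical stand-ins for the lens-driver and acquisition calls, and the coefficients are assumed rather than calibrated.

```python
import numpy as np

# Assumed coefficients of Eq. (10), i = a0 + a1*Z + a2*Z^2 (mA vs. mm).
A0, A1, A2 = -150.0, 0.20, -5.0e-5

def autofocus_and_measure(set_etl_current, capture_3d, i_init=-80.0):
    """Two-pass sketch of the Fig. 5 framework (hypothetical helper calls)."""
    set_etl_current(i_init)           # start at an arbitrary focus level
    z_coarse = capture_3d()           # initial, possibly defocused, depth map (mm)
    z_mean = float(np.mean(z_coarse))
    i_focused = A0 + A1 * z_mean + A2 * z_mean**2   # Eq. (10)
    set_etl_current(i_focused)        # shift the focal plane onto the object
    return capture_3d(), i_focused    # in-focus 3D result and the current used
```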

Tables (2)

Table 1. The input current value and the corresponding focal plane (see the fitting sketch after the table list)

Table 2. RMS errors for results shown in Fig. 6. (Unit: mm)
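The coefficients of Eq. (10) are determined from calibration pairs like those in Table 1. A minimal fitting sketch, using made-up (current, focal-plane) pairs rather than the table's actual values:

```python
import numpy as np

# Made-up calibration pairs standing in for Table 1: ETL driving current (mA)
# and the depth of the resulting focal plane (mm).
i_meas = np.array([-120.0, -80.0, -60.0, -40.0, -20.0])
z_meas = np.array([450.0, 550.0, 650.0, 750.0, 850.0])

# Fit i = a0 + a1*Z + a2*Z^2 (Eq. (10)); np.polyfit returns the
# coefficients highest power first, i.e., [a2, a1, a0].
a2, a1, a0 = np.polyfit(z_meas, i_meas, deg=2)
print(f"i(Z) = {a0:.4g} + {a1:.4g} Z + {a2:.4g} Z^2")

# Evaluate the fitted mapping at an arbitrary depth.
print(f"current at Z = 600 mm: {np.polyval([a2, a1, a0], 600.0):.2f} mA")
```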

Equations (10)


$$I_n(x, y) = A(x, y) + B(x, y)\cos\left[\varphi(x, y) + 2n\pi/N\right],$$
$$\varphi(x, y) = \tan^{-1}\left[\frac{\sum_{n=1}^{N} I_n \sin(2n\pi/N)}{\sum_{n=1}^{N} I_n \cos(2n\pi/N)}\right].$$
$$\Phi(x, y) = \varphi(x, y) + k(x, y) \times 2\pi,$$
$$s^l \left[u^l, v^l, 1\right]^t = \mathbf{P}^l \left[x^w, y^w, z^w\right]^t,$$
$$s^r \left[u^r, v^r, 1\right]^t = \mathbf{P}^r \left[x^w, y^w, z^w\right]^t,$$
$$u^l = \frac{\Phi_V^l T_V^l}{2\pi}, \qquad v^l = \frac{\Phi_H^l T_H^l}{2\pi},$$
$$u^r = \frac{\Phi_V^r T_V^r}{2\pi}, \qquad v^r = \frac{\Phi_H^r T_H^r}{2\pi},$$
$$s^l \left[\frac{\Phi_V^l T_V^l}{2\pi}, \frac{\Phi_H^l T_H^l}{2\pi}, 1\right]^t = \mathbf{P}^l \left[x^w, y^w, z^w\right]^t,$$
$$s^r \left[\frac{\Phi_V^r T_V^r}{2\pi}, \frac{\Phi_H^r T_H^r}{2\pi}, 1\right]^t = \mathbf{P}^r \left[x^w, y^w, z^w\right]^t.$$
$$i = a_0 + a_1 Z + a_2 Z^2,$$
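To ground Eqs. (1)–(3), the sketch below synthesizes an N-step phase-shifted fringe sequence with Eq. (1), recovers the wrapped phase with the arctangent of Eq. (2), and unwraps it per Eq. (3). The image size, intensities, and ground-truth phase are arbitrary, and the fringe order k(x, y) is faked from that known ground truth; a real system obtains k by temporal phase unwrapping.

```python
import numpy as np

N, H, W = 4, 4, 8                      # steps and a tiny synthetic image size
x = np.arange(W)
phi_true = (2 * np.pi * x / W) * np.ones((H, 1))   # known phase ramp, shape (H, W)

# Eq. (1): I_n = A + B*cos(phi + 2*n*pi/N) with arbitrary A, B.
A, B = 128.0, 100.0
n = np.arange(1, N + 1).reshape(-1, 1, 1)
I = A + B * np.cos(phi_true + 2 * np.pi * n / N)   # shape (N, H, W)

# Eq. (2): wrapped phase from the arctangent of the quadrature sums.
num = np.sum(I * np.sin(2 * np.pi * n / N), axis=0)
den = np.sum(I * np.cos(2 * np.pi * n / N), axis=0)
phi_wrapped = np.arctan2(num, den)                 # in (-pi, pi]

# With this shift convention the arctangent recovers -phi; the sign is a
# convention absorbed by calibration. Check against the wrapped ground truth.
assert np.allclose(phi_wrapped, np.angle(np.exp(-1j * phi_true)))

# Eq. (3): add 2*pi times the fringe order k(x, y). Here k comes from the
# known ramp; a real system obtains it by temporal phase unwrapping.
k = np.round((-phi_true - phi_wrapped) / (2 * np.pi))
Phi = phi_wrapped + 2 * np.pi * k                  # continuous phase (equals -phi_true)
assert np.allclose(Phi, -phi_true)
```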
