Abstract

We propose a label-enhanced, patch-based deep learning phase retrieval approach that achieves fast and accurate phase retrieval using only a few fringe patterns as the training dataset. To the best of our knowledge, this is the first demonstration of the advantages of label enhancement and a patch strategy for deep-learning-based phase retrieval in fringe projection. In the proposed method, the enhanced labels in the training dataset are designed so that the deep neural network (DNN) learns the mapping from the input fringe pattern to the enhanced fringe part at its output. Moreover, the training data are cropped into small overlapping patches to expand the number of training samples for the DNN. The performance of the proposed approach is verified on experimental projection fringe patterns, with applications to dynamic fringe projection 3D measurement.
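The two ingredients named above — a fringe-part label paired with each input pattern, and overlapping patch cropping to multiply a small training set — can be sketched in a few lines. This is a minimal illustration only, not the paper's pipeline: the fringe model I = A + B·cos(φ) is the standard fringe-projection formulation, but the phase surface, the patch size (64), and the stride (32) are assumed values chosen for the example.

```python
import numpy as np

def make_training_pair(h=256, w=256, freq=0.25, rng=None):
    """Simulate one (fringe pattern, fringe-part label) training pair.

    Standard fringe-projection model: I = A + B*cos(phi). The background A
    and noise are excluded from the label, so the DNN target is the clean
    fringe part B*cos(phi). The exact enhancement used in the paper is an
    assumption here; this label simply omits background and noise.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    y, x = np.mgrid[0:h, 0:w]
    # Carrier fringes plus a smooth Gaussian bump standing in for object phase.
    phi = 2 * np.pi * freq * x + 5 * np.exp(
        -((x - w / 2) ** 2 + (y - h / 2) ** 2) / (2 * 40 ** 2))
    background = 100 + 20 * np.sin(2 * np.pi * y / h)   # slowly varying A
    fringe_part = 50 * np.cos(phi)                      # label: B*cos(phi)
    pattern = background + fringe_part + rng.normal(0, 2, (h, w))
    return pattern, fringe_part

def crop_overlapping_patches(img, patch=64, stride=32):
    """Crop an image into small overlapped patches to expand the training set."""
    patches = []
    for top in range(0, img.shape[0] - patch + 1, stride):
        for left in range(0, img.shape[1] - patch + 1, stride):
            patches.append(img[top:top + patch, left:left + patch])
    return np.stack(patches)

pattern, label = make_training_pair()
x_patches = crop_overlapping_patches(pattern)   # network inputs
y_patches = crop_overlapping_patches(label)     # enhanced labels
print(x_patches.shape)  # (49, 64, 64): one 256x256 image yields 49 samples
```

With a stride of half the patch size, each 256×256 image already yields a 7×7 grid of samples, which is how a handful of fringe patterns can suffice as a training dataset.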

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement



2019 (5)

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photon. 1(02), 1 (2019).
[Crossref]

S. Feng, C. Zuo, W. Yin, G. Gu, and Q. Chen, “Micro deep learning profilometry for high-speed 3D surface imaging,” Opt. Lasers Eng. 121, 416–427 (2019).
[Crossref]

G. E. Spoorthi, S. Gorthi, and R. Gorthi, “PhaseNet: A deep convolutional neural network for two-dimensional phase unwrapping,” IEEE Signal Process. Let. 26(1), 54–58 (2019).
[Crossref]

J. Zhang, X. Tian, J. Shao, H. Luo, and R. Liang, “Phase unwrapping in optical metrology via denoised and convolutional segmentation networks,” Opt. Express 27(10), 14903–14912 (2019).
[Crossref]

K. Wang, Y. Li, K. Qian, J. Di, and J. Zhao, “One-step robust deep learning phase unwrapping,” Opt. Express 27(10), 15100–15115 (2019).
[Crossref]

2018 (7)

H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018).
[Crossref]

Y. Rivenson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).
[Crossref]

S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018).
[Crossref]

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: a review,” Opt. Lasers Eng. 109, 23–59 (2018).
[Crossref]

S. Zhang, “High-speed 3D shape measurement with structured light methods: a review,” Opt. Laser Eng. 106, 119–131 (2018).
[Crossref]

X. Zhu, L. Song, H. Wang, and Q. Guo, “Assessment of fringe pattern decomposition with a cross-correlation index for phase retrieval in fringe projection 3D measurements,” Sensors 18(10), 3578 (2018).
[Crossref]

L. Song, X. Li, Y. Yang, X. Zhu, Q. Guo, and H. Liu, “Structured-Light based 3D reconstruction system for cultural relic packaging,” Sensors 18(9), 2981 (2018).
[Crossref]

2017 (3)

B. Li, Y. An, D. Cappelleri, J. Xu, and S. Zhang, “High-accuracy, high-speed 3D structured light imaging techniques and potential applications to intelligent robotics,” Int. J. Intell. Robot. Applic. 1(1), 86–103 (2017).
[Crossref]

M. T. McCann, E. Froustey, M. Unser, M. Unser, M. Unser, and K. Jin, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26(9), 4509–4522 (2017).
[Crossref]

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Imag. Process. 26(7), 3142–3155 (2017).
[Crossref]

2016 (3)

B. Li, C. Tang, X. Zhu, Y. Su, and W. Xu, “Shearlet transform for phase extraction in fringe projection profilometry with edges discontinuity,” Opt. Lasers Eng. 78, 91–98 (2016).
[Crossref]

B. Li, C. Tang, X. Zhu, X. Chen, Y. Su, and Y. Cai, “A 3D shape retrieval method for orthogonal fringe projection based on a combination of variational image decomposition and variational mode decomposition,” Opt. Lasers Eng. 86, 345–355 (2016).
[Crossref]

C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref]

2014 (1)

X. Zhu, C. Tang, B. Li, C. Sun, and L. Wang, “Phase retrieval from single frame projection fringe pattern with variational image decomposition,” Opt. Lasers Eng. 59, 25–33 (2014).
[Crossref]

2013 (2)

D. Labate, L. Mantovani, and P. S. Negi, “Shearlet smoothness spaces,” J. Fourier. Anal. Appl. 19(3), 577–611 (2013).
[Crossref]

X. Zhu, Z. Chen, and C. Tang, “Variational image decomposition for automatic background and noise removal of fringe patterns,” Opt. Lett. 38(3), 275–277 (2013).
[Crossref]

2012 (1)

2010 (2)

S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).
[Crossref]

H. Lei, K. Qian, P. Bing, and K. Anand, “Comparison of Fourier transform, windowed Fourier transform, and Wavelet transform methods for phase extraction from a single fringe pattern in fringe projection profilometry,” Opt. Lasers Eng. 48(2), 141–148 (2010).
[Crossref]

2007 (1)

1983 (1)

Abid, A.

An, Y.

B. Li, Y. An, D. Cappelleri, J. Xu, and S. Zhang, “High-accuracy, high-speed 3D structured light imaging techniques and potential applications to intelligent robotics,” Int. J. Intell. Robot. Applic. 1(1), 86–103 (2017).
[Crossref]

Anand, K.

H. Lei, K. Qian, P. Bing, and K. Anand, “Comparison of Fourier transform, windowed Fourier transform, and Wavelet transform methods for phase extraction from a single fringe pattern in fringe projection profilometry,” Opt. Lasers Eng. 48(2), 141–148 (2010).
[Crossref]

Bing, P.

H. Lei, K. Qian, P. Bing, and K. Anand, “Comparison of Fourier transform, windowed Fourier transform, and Wavelet transform methods for phase extraction from a single fringe pattern in fringe projection profilometry,” Opt. Lasers Eng. 48(2), 141–148 (2010).
[Crossref]

Burton, D.

Cai, Y.

B. Li, C. Tang, X. Zhu, X. Chen, Y. Su, and Y. Cai, “A 3D shape retrieval method for orthogonal fringe projection based on a combination of variational image decomposition and variational mode decomposition,” Opt. Lasers Eng. 86, 345–355 (2016).
[Crossref]

Cappelleri, D.

B. Li, Y. An, D. Cappelleri, J. Xu, and S. Zhang, “High-accuracy, high-speed 3D structured light imaging techniques and potential applications to intelligent robotics,” Int. J. Intell. Robot. Applic. 1(1), 86–103 (2017).
[Crossref]

Chen, Q.

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photon. 1(02), 1 (2019).
[Crossref]

S. Feng, C. Zuo, W. Yin, G. Gu, and Q. Chen, “Micro deep learning profilometry for high-speed 3D surface imaging,” Opt. Lasers Eng. 121, 416–427 (2019).
[Crossref]

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: a review,” Opt. Lasers Eng. 109, 23–59 (2018).
[Crossref]

Chen, X.

B. Li, C. Tang, X. Zhu, X. Chen, Y. Su, and Y. Cai, “A 3D shape retrieval method for orthogonal fringe projection based on a combination of variational image decomposition and variational mode decomposition,” Opt. Lasers Eng. 86, 345–355 (2016).
[Crossref]

Chen, Y.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Imag. Process. 26(7), 3142–3155 (2017).
[Crossref]

Chen, Z.

Di, J.

Dong, C.

C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref]

Feng, S.

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photon. 1(02), 1 (2019).
[Crossref]

S. Feng, C. Zuo, W. Yin, G. Gu, and Q. Chen, “Micro deep learning profilometry for high-speed 3D surface imaging,” Opt. Lasers Eng. 121, 416–427 (2019).
[Crossref]

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: a review,” Opt. Lasers Eng. 109, 23–59 (2018).
[Crossref]

Froustey, E.

M. T. McCann, E. Froustey, M. Unser, M. Unser, M. Unser, and K. Jin, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26(9), 4509–4522 (2017).
[Crossref]

Gdeisat, M.

Gorthi, R.

G. E. Spoorthi, S. Gorthi, and R. Gorthi, “PhaseNet: A deep convolutional neural network for two-dimensional phase unwrapping,” IEEE Signal Process. Let. 26(1), 54–58 (2019).
[Crossref]

Gorthi, S.

G. E. Spoorthi, S. Gorthi, and R. Gorthi, “PhaseNet: A deep convolutional neural network for two-dimensional phase unwrapping,” IEEE Signal Process. Let. 26(1), 54–58 (2019).
[Crossref]

Gorthi, S. S.

S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).
[Crossref]

Gu, G.

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photon. 1(02), 1 (2019).
[Crossref]

S. Feng, C. Zuo, W. Yin, G. Gu, and Q. Chen, “Micro deep learning profilometry for high-speed 3D surface imaging,” Opt. Lasers Eng. 121, 416–427 (2019).
[Crossref]

Günaydin, H.

Y. Rivenson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).
[Crossref]

Guo, Q.

L. Song, X. Li, Y. Yang, X. Zhu, Q. Guo, and H. Liu, “Structured-Light based 3D reconstruction system for cultural relic packaging,” Sensors 18(9), 2981 (2018).
[Crossref]

X. Zhu, L. Song, H. Wang, and Q. Guo, “Assessment of fringe pattern decomposition with a cross-correlation index for phase retrieval in fringe projection 3D measurements,” Sensors 18(10), 3578 (2018).
[Crossref]

He, K.

C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref]

Hu, Y.

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photon. 1(02), 1 (2019).
[Crossref]

Huang, L.

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: a review,” Opt. Lasers Eng. 109, 23–59 (2018).
[Crossref]

Jin, K.

M. T. McCann, E. Froustey, M. Unser, M. Unser, M. Unser, and K. Jin, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26(9), 4509–4522 (2017).
[Crossref]

Labate, D.

D. Labate, L. Mantovani, and P. S. Negi, “Shearlet smoothness spaces,” J. Fourier. Anal. Appl. 19(3), 577–611 (2013).
[Crossref]

Lalor, M.

Lei, H.

H. Lei, K. Qian, P. Bing, and K. Anand, “Comparison of Fourier transform, windowed Fourier transform, and Wavelet transform methods for phase extraction from a single fringe pattern in fringe projection profilometry,” Opt. Lasers Eng. 48(2), 141–148 (2010).
[Crossref]

Li, B.

B. Li, Y. An, D. Cappelleri, J. Xu, and S. Zhang, “High-accuracy, high-speed 3D structured light imaging techniques and potential applications to intelligent robotics,” Int. J. Intell. Robot. Applic. 1(1), 86–103 (2017).
[Crossref]

B. Li, C. Tang, X. Zhu, Y. Su, and W. Xu, “Shearlet transform for phase extraction in fringe projection profilometry with edges discontinuity,” Opt. Lasers Eng. 78, 91–98 (2016).
[Crossref]

B. Li, C. Tang, X. Zhu, X. Chen, Y. Su, and Y. Cai, “A 3D shape retrieval method for orthogonal fringe projection based on a combination of variational image decomposition and variational mode decomposition,” Opt. Lasers Eng. 86, 345–355 (2016).
[Crossref]

X. Zhu, C. Tang, B. Li, C. Sun, and L. Wang, “Phase retrieval from single frame projection fringe pattern with variational image decomposition,” Opt. Lasers Eng. 59, 25–33 (2014).
[Crossref]

Li, X.

L. Song, X. Li, Y. Yang, X. Zhu, Q. Guo, and H. Liu, “Structured-Light based 3D reconstruction system for cultural relic packaging,” Sensors 18(9), 2981 (2018).
[Crossref]

Li, Y.

Liang, R.

Lilley, F.

Liu, H.

L. Song, X. Li, Y. Yang, X. Zhu, Q. Guo, and H. Liu, “Structured-Light based 3D reconstruction system for cultural relic packaging,” Sensors 18(9), 2981 (2018).
[Crossref]

Loy, C. C.

C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref]

Luo, H.

Lyu, M.

Mantovani, L.

D. Labate, L. Mantovani, and P. S. Negi, “Shearlet smoothness spaces,” J. Fourier. Anal. Appl. 19(3), 577–611 (2013).
[Crossref]

McCann, M. T.

M. T. McCann, E. Froustey, M. Unser, M. Unser, M. Unser, and K. Jin, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26(9), 4509–4522 (2017).
[Crossref]

Meng, D.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Imag. Process. 26(7), 3142–3155 (2017).
[Crossref]

Mutoh, K.

Negi, P. S.

D. Labate, L. Mantovani, and P. S. Negi, “Shearlet smoothness spaces,” J. Fourier. Anal. Appl. 19(3), 577–611 (2013).
[Crossref]

Ozcan, A.

Y. Rivenson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).
[Crossref]

Podoleanu, A. G.

Qian, K.

K. Wang, Y. Li, K. Qian, J. Di, and J. Zhao, “One-step robust deep learning phase unwrapping,” Opt. Express 27(10), 15100–15115 (2019).
[Crossref]

H. Lei, K. Qian, P. Bing, and K. Anand, “Comparison of Fourier transform, windowed Fourier transform, and Wavelet transform methods for phase extraction from a single fringe pattern in fringe projection profilometry,” Opt. Lasers Eng. 48(2), 141–148 (2010).
[Crossref]

Rastogi, P.

S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).
[Crossref]

Rivenson, Y.

Y. Rivenson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).
[Crossref]

Shao, J.

Situ, G.

Song, L.

L. Song, X. Li, Y. Yang, X. Zhu, Q. Guo, and H. Liu, “Structured-Light based 3D reconstruction system for cultural relic packaging,” Sensors 18(9), 2981 (2018).
[Crossref]

X. Zhu, L. Song, H. Wang, and Q. Guo, “Assessment of fringe pattern decomposition with a cross-correlation index for phase retrieval in fringe projection 3D measurements,” Sensors 18(10), 3578 (2018).
[Crossref]

Spoorthi, G. E.

G. E. Spoorthi, S. Gorthi, and R. Gorthi, “PhaseNet: A deep convolutional neural network for two-dimensional phase unwrapping,” IEEE Signal Process. Let. 26(1), 54–58 (2019).
[Crossref]

Su, Y.

B. Li, C. Tang, X. Zhu, X. Chen, Y. Su, and Y. Cai, “A 3D shape retrieval method for orthogonal fringe projection based on a combination of variational image decomposition and variational mode decomposition,” Opt. Lasers Eng. 86, 345–355 (2016).
[Crossref]

B. Li, C. Tang, X. Zhu, Y. Su, and W. Xu, “Shearlet transform for phase extraction in fringe projection profilometry with edges discontinuity,” Opt. Lasers Eng. 78, 91–98 (2016).
[Crossref]

Sun, C.

X. Zhu, C. Tang, B. Li, C. Sun, and L. Wang, “Phase retrieval from single frame projection fringe pattern with variational image decomposition,” Opt. Lasers Eng. 59, 25–33 (2014).
[Crossref]

Takeda, M.

Tang, C.

B. Li, C. Tang, X. Zhu, Y. Su, and W. Xu, “Shearlet transform for phase extraction in fringe projection profilometry with edges discontinuity,” Opt. Lasers Eng. 78, 91–98 (2016).
[Crossref]

B. Li, C. Tang, X. Zhu, X. Chen, Y. Su, and Y. Cai, “A 3D shape retrieval method for orthogonal fringe projection based on a combination of variational image decomposition and variational mode decomposition,” Opt. Lasers Eng. 86, 345–355 (2016).
[Crossref]

X. Zhu, C. Tang, B. Li, C. Sun, and L. Wang, “Phase retrieval from single frame projection fringe pattern with variational image decomposition,” Opt. Lasers Eng. 59, 25–33 (2014).
[Crossref]

X. Zhu, Z. Chen, and C. Tang, “Variational image decomposition for automatic background and noise removal of fringe patterns,” Opt. Lett. 38(3), 275–277 (2013).
[Crossref]

Tang, X.

C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref]

Tao, T.

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photon. 1(02), 1 (2019).
[Crossref]

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: a review,” Opt. Lasers Eng. 109, 23–59 (2018).
[Crossref]

Teng, D.

Y. Rivenson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).
[Crossref]

Tian, X.

Unser, M.

M. T. McCann, E. Froustey, M. Unser, M. Unser, M. Unser, and K. Jin, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26(9), 4509–4522 (2017).
[Crossref]

M. T. McCann, E. Froustey, M. Unser, M. Unser, M. Unser, and K. Jin, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26(9), 4509–4522 (2017).
[Crossref]

M. T. McCann, E. Froustey, M. Unser, M. Unser, M. Unser, and K. Jin, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26(9), 4509–4522 (2017).
[Crossref]

Wang, H.

X. Zhu, L. Song, H. Wang, and Q. Guo, “Assessment of fringe pattern decomposition with a cross-correlation index for phase retrieval in fringe projection 3D measurements,” Sensors 18(10), 3578 (2018).
[Crossref]

H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018).
[Crossref]

Wang, K.

Wang, L.

X. Zhu, C. Tang, B. Li, C. Sun, and L. Wang, “Phase retrieval from single frame projection fringe pattern with variational image decomposition,” Opt. Lasers Eng. 59, 25–33 (2014).
[Crossref]

Xu, J.

B. Li, Y. An, D. Cappelleri, J. Xu, and S. Zhang, “High-accuracy, high-speed 3D structured light imaging techniques and potential applications to intelligent robotics,” Int. J. Intell. Robot. Applic. 1(1), 86–103 (2017).
[Crossref]

Xu, W.

B. Li, C. Tang, X. Zhu, Y. Su, and W. Xu, “Shearlet transform for phase extraction in fringe projection profilometry with edges discontinuity,” Opt. Lasers Eng. 78, 91–98 (2016).
[Crossref]

Yang, T.

Yang, Y.

L. Song, X. Li, Y. Yang, X. Zhu, Q. Guo, and H. Liu, “Structured-Light based 3D reconstruction system for cultural relic packaging,” Sensors 18(9), 2981 (2018).
[Crossref]

Yang, Z.

Yin, W.

S. Feng, C. Zuo, W. Yin, G. Gu, and Q. Chen, “Micro deep learning profilometry for high-speed 3D surface imaging,” Opt. Lasers Eng. 121, 416–427 (2019).
[Crossref]

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photon. 1(02), 1 (2019).
[Crossref]

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: a review,” Opt. Lasers Eng. 109, 23–59 (2018).
[Crossref]

Zhang, J.

Zhang, K.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Imag. Process. 26(7), 3142–3155 (2017).
[Crossref]

Zhang, L.

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photon. 1(02), 1 (2019).
[Crossref]

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Imag. Process. 26(7), 3142–3155 (2017).
[Crossref]

Zhang, S.

S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018).
[Crossref]

S. Zhang, “High-speed 3D shape measurement with structured light methods: a review,” Opt. Laser Eng. 106, 119–131 (2018).
[Crossref]

B. Li, Y. An, D. Cappelleri, J. Xu, and S. Zhang, “High-accuracy, high-speed 3D structured light imaging techniques and potential applications to intelligent robotics,” Int. J. Intell. Robot. Applic. 1(1), 86–103 (2017).
[Crossref]

Zhang, Y.

Y. Rivenson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).
[Crossref]

Zhao, H.

Zhao, J.

Zhou, X.

Zhu, X.

L. Song, X. Li, Y. Yang, X. Zhu, Q. Guo, and H. Liu, “Structured-Light based 3D reconstruction system for cultural relic packaging,” Sensors 18(9), 2981 (2018).
[Crossref]

X. Zhu, L. Song, H. Wang, and Q. Guo, “Assessment of fringe pattern decomposition with a cross-correlation index for phase retrieval in fringe projection 3D measurements,” Sensors 18(10), 3578 (2018).
[Crossref]

B. Li, C. Tang, X. Zhu, Y. Su, and W. Xu, “Shearlet transform for phase extraction in fringe projection profilometry with edges discontinuity,” Opt. Lasers Eng. 78, 91–98 (2016).
[Crossref]

B. Li, C. Tang, X. Zhu, X. Chen, Y. Su, and Y. Cai, “A 3D shape retrieval method for orthogonal fringe projection based on a combination of variational image decomposition and variational mode decomposition,” Opt. Lasers Eng. 86, 345–355 (2016).
[Crossref]

X. Zhu, C. Tang, B. Li, C. Sun, and L. Wang, “Phase retrieval from single frame projection fringe pattern with variational image decomposition,” Opt. Lasers Eng. 59, 25–33 (2014).
[Crossref]

X. Zhu, Z. Chen, and C. Tang, “Variational image decomposition for automatic background and noise removal of fringe patterns,” Opt. Lett. 38(3), 275–277 (2013).
[Crossref]

Zuo, C.

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photon. 1(02), 1 (2019).
[Crossref]

S. Feng, C. Zuo, W. Yin, G. Gu, and Q. Chen, “Micro deep learning profilometry for high-speed 3D surface imaging,” Opt. Lasers Eng. 121, 416–427 (2019).
[Crossref]

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: a review,” Opt. Lasers Eng. 109, 23–59 (2018).
[Crossref]

Zuo, W.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Imag. Process. 26(7), 3142–3155 (2017).
[Crossref]

Adv. Photon. (1)

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photon. 1(02), 1 (2019).
[Crossref]

Appl. Opt. (2)

IEEE Signal Process. Let. (1)

G. E. Spoorthi, S. Gorthi, and R. Gorthi, “PhaseNet: A deep convolutional neural network for two-dimensional phase unwrapping,” IEEE Signal Process. Let. 26(1), 54–58 (2019).
[Crossref]

IEEE Trans. Imag. Process. (1)

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Imag. Process. 26(7), 3142–3155 (2017).
[Crossref]

IEEE Trans. Image Process. (1)

M. T. McCann, E. Froustey, M. Unser, M. Unser, M. Unser, and K. Jin, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26(9), 4509–4522 (2017).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (1)

C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref]

Int. J. Intell. Robot. Applic. (1)

B. Li, Y. An, D. Cappelleri, J. Xu, and S. Zhang, “High-accuracy, high-speed 3D structured light imaging techniques and potential applications to intelligent robotics,” Int. J. Intell. Robot. Applic. 1(1), 86–103 (2017).
[Crossref]

J. Fourier. Anal. Appl. (1)

D. Labate, L. Mantovani, and P. S. Negi, “Shearlet smoothness spaces,” J. Fourier. Anal. Appl. 19(3), 577–611 (2013).
[Crossref]

Light: Sci. Appl. (1)

Y. Rivenson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).
[Crossref]

Opt. Express (4)

Opt. Laser Eng. (1)

S. Zhang, “High-speed 3D shape measurement with structured light methods: a review,” Opt. Laser Eng. 106, 119–131 (2018).
[Crossref]

Opt. Lasers Eng. (8)

S. Feng, C. Zuo, W. Yin, G. Gu, and Q. Chen, “Micro deep learning profilometry for high-speed 3D surface imaging,” Opt. Lasers Eng. 121, 416–427 (2019).
[Crossref]

S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018).
[Crossref]

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: a review,” Opt. Lasers Eng. 109, 23–59 (2018).
[Crossref]

S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).
[Crossref]

H. Lei, K. Qian, P. Bing, and K. Anand, “Comparison of Fourier transform, windowed Fourier transform, and Wavelet transform methods for phase extraction from a single fringe pattern in fringe projection profilometry,” Opt. Lasers Eng. 48(2), 141–148 (2010).
[Crossref]

X. Zhu, C. Tang, B. Li, C. Sun, and L. Wang, “Phase retrieval from single frame projection fringe pattern with variational image decomposition,” Opt. Lasers Eng. 59, 25–33 (2014).
[Crossref]

B. Li, C. Tang, X. Zhu, Y. Su, and W. Xu, “Shearlet transform for phase extraction in fringe projection profilometry with edges discontinuity,” Opt. Lasers Eng. 78, 91–98 (2016).
[Crossref]

B. Li, C. Tang, X. Zhu, X. Chen, Y. Su, and Y. Cai, “A 3D shape retrieval method for orthogonal fringe projection based on a combination of variational image decomposition and variational mode decomposition,” Opt. Lasers Eng. 86, 345–355 (2016).
[Crossref]

L. Song, X. Li, Y. Yang, X. Zhu, Q. Guo, and H. Liu, “Structured-Light based 3D reconstruction system for cultural relic packaging,” Sensors 18(9), 2981 (2018).
[Crossref]

X. Zhu, L. Song, H. Wang, and Q. Guo, “Assessment of fringe pattern decomposition with a cross-correlation index for phase retrieval in fringe projection 3D measurements,” Sensors 18(10), 3578 (2018).
[Crossref]


Figures (15)

Fig. 1. Diagram of the fringe part extraction.
Fig. 2. Diagram of the proposed deep learning phase retrieval.
Fig. 3. Simulated training dataset.
Fig. 4. Experimentally obtained training dataset.
Fig. 5. Simulated fringe pattern (a-1), true fringe part (a-2), and true phase (a-3).
Fig. 6. Fringe parts extracted from the simulated fringe pattern by the DNN with the patch strategy (DNN1) and without the patch strategy (DNN2). (a-1) and (a-2): fringe parts extracted by DNN1 and DNN2; (b-1) and (b-2): errors of the fringe parts in Figs. 6(a-1) and 6(a-2).
Fig. 7. Phase results from the simulated fringe pattern by the DNN with the patch strategy (DNN1) and without the patch strategy (DNN2). (a-1) and (a-2): phases extracted by DNN1 and DNN2; (b-1) and (b-2): phase errors of Figs. 7(a-1) and 7(a-2); (c): plots of the 255th column of Figs. 7(a-1) and 7(a-2).
Fig. 8. Fringe parts and phase results from a real fringe pattern by the DNN with the patch strategy (DNN1) and without the patch strategy (DNN2). (a-1): real fringe pattern; (a-2) and (a-3): fringe parts extracted by DNN1 and DNN2, respectively; (b-1), (b-2), and (b-3): phases extracted by the four-step phase-shift method, DNN1, and DNN2, respectively.
Fig. 9. Real fringe patterns in fringe projection 3D measurement. (a): fringe pattern of a human face; (b): fringe pattern of a human hand.
Fig. 10. Fringe part extraction by the DNN without label enhancement (DNN) and with label enhancement (DNN_Enhance). (a-1) and (a-2): fringe parts extracted from Fig. 9(a) without and with label enhancement, respectively; (b-1) and (b-2): fringe parts extracted from Fig. 9(b) without and with label enhancement, respectively.
Fig. 11. Phase retrieval by the DNN without label enhancement (DNN) and with label enhancement (DNN_Enhance). (a-1) and (a-2): phases extracted from Fig. 9(a) without and with label enhancement, respectively; (b-1) and (b-2): phases extracted from Fig. 9(b) without and with label enhancement, respectively; (c-1): plots of the 255th column of Figs. 11(a-1) and 11(a-2); (c-2): plots of the 200th column of Figs. 11(b-1) and 11(b-2).
Fig. 12. Real fringe patterns in fringe projection 3D measurement. (a): fringe pattern of a human face; (b): fringe pattern of a plastic box.
Fig. 13. Phase results by the phase-shift, FT, and proposed DNN methods. (a-1), (a-2), and (a-3): phase-shift, FT, and proposed DNN results for Fig. 12(a); (b-1), (b-2), and (b-3): phase-shift, FT, and proposed DNN results for Fig. 12(b); (c-1): plots of the 255th row of Figs. 13(a-1), 13(a-2), and 13(a-3); (c-2): plots of the 255th column of Figs. 13(b-1), 13(b-2), and 13(b-3).
Fig. 14. Experimental fringe patterns of a hand in motion at six different times.
Fig. 15. Phase results of a hand in motion at six different times by the FT and DNN methods.

Equations (4)


$$I(x,y) = a(x,y) + b(x,y)\cos\big(\phi(x,y) + 2\pi f_0 x\big) + \mathrm{noise},$$
$$\mathrm{loss}(\theta) = \frac{1}{2N}\sum_{i=1}^{N} \left\lVert R(p_i;\theta) - (p_i - f_i) \right\rVert_F^2,$$
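This residual loss compares the network output $R(p_i;\theta)$ against the label $p_i - f_i$ (input patch minus its fringe part). A minimal NumPy sketch of how it is evaluated, with illustrative array shapes rather than the paper's actual training code:

```python
import numpy as np

def residual_loss(outputs, patches, fringe_parts):
    """Average squared Frobenius-norm residual loss over N patches.

    outputs[i]      -- network output R(p_i; theta) for patch i
    patches[i]      -- input patch p_i
    fringe_parts[i] -- labeled fringe part f_i, so the target is p_i - f_i
    """
    N = len(patches)
    total = 0.0
    for R, p, f in zip(outputs, patches, fringe_parts):
        total += np.sum((R - (p - f)) ** 2)  # squared Frobenius norm
    return total / (2 * N)
```

When the network output exactly equals the residual `p - f`, the loss is zero, which is the fixed point the training drives toward.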
$$\phi(x,y) + \varphi_c(x,y) = \arctan\left(\frac{\operatorname{Im}\{H(f(x,y))\}}{\operatorname{Re}\{H(f(x,y))\}}\right),$$
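A minimal one-row sketch of this step, assuming the FFT-based analytic signal from `scipy.signal.hilbert` stands in for $H(\cdot)$; the carrier frequency and test phase below are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.signal import hilbert

# One row of a background-free fringe part f(x) = b*cos(phi + 2*pi*f0*x).
N = 512
x = np.arange(N)
f0 = 26 / N                                          # carrier: integer cycles over N
phi = 1.5 * np.exp(-((x - N / 2) ** 2) / (2 * 60.0 ** 2))  # smooth test phase
f = 0.4 * np.cos(phi + 2 * np.pi * f0 * x)           # fringe part, no background

analytic = hilbert(f)                                # analytic signal H(f)
wrapped = np.arctan2(analytic.imag, analytic.real)   # phi + carrier, wrapped
phi_est = np.unwrap(wrapped) - 2 * np.pi * f0 * x    # remove the linear carrier
```

Away from the edges (where the FFT-based analytic signal is least accurate), `phi_est` closely tracks the test phase `phi`.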
$$\begin{aligned}
I(x,y,0) &= a(x,y) + b(x,y)\cos\big(\phi(x,y) + 2\pi f_0 x + 0\big) + \mathrm{noise},\\
I(x,y,\pi/2) &= a(x,y) + b(x,y)\cos\big(\phi(x,y) + 2\pi f_0 x + \pi/2\big) + \mathrm{noise},\\
I(x,y,\pi) &= a(x,y) + b(x,y)\cos\big(\phi(x,y) + 2\pi f_0 x + \pi\big) + \mathrm{noise},\\
I(x,y,3\pi/2) &= a(x,y) + b(x,y)\cos\big(\phi(x,y) + 2\pi f_0 x + 3\pi/2\big) + \mathrm{noise}.
\end{aligned}$$
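These four shifted patterns determine the wrapped phase through the standard four-step relation $\phi + 2\pi f_0 x = \arctan\!\big((I_{3\pi/2} - I_{\pi/2})/(I_0 - I_\pi)\big)$. A noise-free simulation sketch; the values of $a$, $b$, $f_0$, and the test phase are illustrative:

```python
import numpy as np

# Simulate four pi/2-shifted fringe patterns and recover the wrapped phase.
H, W = 256, 256
y, x = np.mgrid[0:H, 0:W]
a, b, f0 = 0.5, 0.4, 1 / 16                          # background, modulation, carrier
phi = 2.0 * np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / (2 * 40.0 ** 2))  # test phase

shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
I = [a + b * np.cos(phi + 2 * np.pi * f0 * x + d) for d in shifts]

# I[3] - I[1] = 2b*sin(PHI), I[0] - I[2] = 2b*cos(PHI), with PHI = phi + 2*pi*f0*x
wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])
```

Subtracting the shifted patterns cancels the background $a(x,y)$, which is why four frames recover the phase exactly when the scene is static.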
