Abstract

The extraction of absolute phase from an interference pattern is a key step in 3D deformation measurement with digital holographic interferometry (DHI) and is an ill-posed problem. Estimating the absolute unwrapped phase becomes even more challenging when the wrapped phase obtained from the interference pattern is noisy. In this paper, we propose a novel multitask deep learning approach for phase reconstruction and 3D deformation measurement in DHI, referred to as TriNet, which learns to perform two parallel tasks from a single input image. The proposed TriNet has a pyramidal encoder–two-decoder framework for multi-scale information fusion. To our knowledge, TriNet is the first multitask approach to accomplish simultaneous denoising and phase unwrapping of the wrapped phase from the interference fringes in a single step for absolute phase reconstruction. The proposed architecture is more elegant than recent multitask learning methods such as Y-Net and state-of-the-art segmentation approaches such as UNet$++$. Further, performing denoising and phase unwrapping simultaneously enables deformation measurement from the highly noisy wrapped phase of DHI data. Simulation and experimental comparisons demonstrate the efficacy of the proposed approach for absolute phase reconstruction and 3D deformation measurement relative to existing conventional methods and state-of-the-art deep learning methods.
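The full article gives the exact TriNet configuration; as a rough, hedged illustration of the one-encoder, two-decoder multitask idea described above, the sketch below shows a single shared encoder feeding a denoising (regression) decoder and a fringe-order (segmentation) decoder in PyTorch. The layer sizes, depth, number of fringe-order classes, and the plain skip connections (rather than TriNet's dense, UNet$++$-style connections) are assumptions for illustration only, not the published architecture.

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions, each followed by batch normalization and ReLU.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class TwoHeadedSketch(nn.Module):
    """Illustrative one-encoder/two-decoder multitask network (not the published TriNet)."""
    def __init__(self, n_fringe_classes=20):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = conv_block(1, 32), conv_block(32, 64), conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # Left wing: denoising decoder (regresses the clean wrapped phase).
        self.dec_d2, self.dec_d1 = conv_block(128 + 64, 64), conv_block(64 + 32, 32)
        self.head_denoise = nn.Conv2d(32, 1, 1)
        # Right wing: fringe-order decoder (per-pixel classification).
        self.dec_s2, self.dec_s1 = conv_block(128 + 64, 64), conv_block(64 + 32, 32)
        self.head_fringe = nn.Conv2d(32, n_fringe_classes, 1)

    def forward(self, x):
        # Shared pyramidal encoder over the noisy wrapped phase x of shape (N, 1, H, W),
        # with H and W divisible by 4 for this two-level sketch.
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        # Denoising branch with skip connections from the shared encoder.
        d = self.dec_d2(torch.cat([self.up(e3), e2], dim=1))
        d = self.dec_d1(torch.cat([self.up(d), e1], dim=1))
        denoised_wrapped = self.head_denoise(d)
        # Fringe-order branch reusing the same encoder features.
        s = self.dec_s2(torch.cat([self.up(e3), e2], dim=1))
        s = self.dec_s1(torch.cat([self.up(s), e1], dim=1))
        fringe_logits = self.head_fringe(s)
        return denoised_wrapped, fringe_logits

The two outputs correspond to the two decoder wings shown in Fig. 2 and feed the composite loss given in the Equations section below.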

© 2021 Optica Publishing Group

References

  1. E. Cuche, F. Bevilacqua, and C. Depeursinge, “Digital holography for quantitative phase-contrast imaging,” Opt. Lett. 24, 291–293 (1999).
  2. T. R. Judge and P. Bryanston-Cross, “A review of phase unwrapping techniques in fringe analysis,” Opt. Lasers Eng. 21, 199–239 (1994).
  3. R. G. Waghmare, D. Mishra, G. S. Subrahmanyam, E. Banoth, and S. S. Gorthi, “Signal tracking approach for phase estimation in digital holographic interferometry,” Appl. Opt. 53, 4150–4157 (2014).
  4. K. Wang, Y. Li, Q. Kemao, J. Di, and J. Zhao, “One-step robust deep learning phase unwrapping,” Opt. Express 27, 15100–15115 (2019).
  5. Q. Li, C. Bao, J. Zhao, and Z. Jiang, “A new fast quality-guided flood-fill phase unwrapping algorithm,” J. Phys. Conf. Ser. 1069, 012182 (2018).
  6. S. V. D. Jeught, J. Sijbers, and J. J. Dirckx, “Fast Fourier-based phase unwrapping on the graphics processing unit in real-time imaging applications,” J. Imaging 1, 31–44 (2015).
  7. V. V. Volkov and Y. Zhu, “Deterministic phase unwrapping in the presence of noise,” Opt. Lett. 28, 2156–2158 (2003).
  8. M. Zhao, L. Huang, Q. Zhang, X. Su, A. Asundi, and Q. Kemao, “Quality-guided phase unwrapping technique: comparison of quality maps and guiding strategies,” Appl. Opt. 50, 6214–6224 (2011).
  9. S. S. Gorthi, G. Rajshekhar, and P. Rastogi, “Strain estimation in digital holographic interferometry using piecewise polynomial phase approximation based method,” Opt. Express 18, 560–565 (2010).
  10. R. G. Waghmare, P. R. Sukumar, G. R. K. S. Subrahmanyam, R. K. Singh, and D. Mishra, “Particle-filter-based phase estimation in digital holographic interferometry,” J. Opt. Soc. Am. A 33, 326–332 (2016).
  11. R. G. Waghmare, R. S. S. Gorthi, and D. Mishra, “Wrapped statistics-based phase retrieval from interference fringes,” J. Mod. Opt. 63, 1384–1390 (2016).
  12. G. Spoorthi, S. Gorthi, and R. K. S. S. Gorthi, “PhaseNet: a deep convolutional neural network for two-dimensional phase unwrapping,” IEEE Signal Process. Lett. 26, 54–58 (2018).
  13. G. Spoorthi, R. K. S. S. Gorthi, and S. Gorthi, “PhaseNet 2.0: phase unwrapping of noisy data based on deep learning approach,” IEEE Trans. Image Process. 29, 4862–4872 (2020).
  14. V. K. Sumanth and R. K. S. S. Gorthi, “A deep learning framework for 3D surface profiling of the objects using digital holographic interferometry,” in IEEE International Conference on Image Processing (ICIP) (IEEE, 2020), pp. 2656–2660.
  15. Z. Ren, Z. Xu, and E. Y. Lam, “End-to-end deep learning framework for digital holographic reconstruction,” Adv. Photon. 1, 016004 (2019).
  16. K. Wang, J. Dou, Q. Kemao, J. Di, and J. Zhao, “Y-Net: a one-to-two deep learning framework for digital holographic reconstruction,” Opt. Lett. 44, 4765–4768 (2019).
  17. Z. Zhou, M. M. R. Siddiquee, N. Tajbakhsh, and J. Liang, “UNet++: designing skip connections to exploit multiscale features in image segmentation,” IEEE Trans. Med. Imaging 39, 1856–1867 (2019).
  18. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.
  19. V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: a deep convolutional encoder-decoder architecture for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495 (2017).
  20. S. Mehta, E. Mercan, J. Bartlett, D. Weaver, J. G. Elmore, and L. Shapiro, “Y-Net: joint segmentation and classification for diagnosis of breast biopsy images,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2018), pp. 893–901.
  21. S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” in International Conference on Machine Learning (PMLR, 2015), pp. 448–456.
  22. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.
  23. A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, “PyTorch: an imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems (2019), pp. 8026–8037.
  24. D. Kingma and J. Ba, “Adam: a method for stochastic optimization,” in 3rd International Conference on Learning Representations, San Diego, California (2014).
  25. F. Lovergine, S. Stramaglia, G. Nico, and N. Veneziani, “Fast weighted least squares for solving the phase unwrapping problem,” in IEEE International Geoscience and Remote Sensing Symposium, IGARSS’99 (Cat. No. 99CH36293) (IEEE, 1999), Vol. 2, pp. 1348–1350.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Figures (10)

Fig. 1. Flow diagram of the proposed approach for two-dimensional phase unwrapping. The architectural details of the proposed TriNet are shown in Fig. 2.
Fig. 2. TriNet for two-dimensional phase unwrapping. The right wing of the architecture predicts the fringe order in terms of segmentation labels; the left wing denoises the noisy input wrapped phase. The actual 3D profile is reconstructed by adding the denoised phase pattern to $2\pi \times$ the predicted fringe order (a numerical sketch of this reconstruction is given after the figure list). Note that light arrows represent dense connections, and bold vertical arrows represent the downsampling ($\times 2$) operation.
Fig. 3. Example of a synthetic sample $f(x,y)$ generated for this work. The governing equation of $f(x,y)$ is given in Eq. (2), and that of the wrapped phase in Eq. (3).
Fig. 4. Convergence of the loss curves employed for training the architecture. (a) Loss curve for the denoising branch, (b) loss curve for the segmentation branch, and (c) total loss curve.
Fig. 5. Noisy wrapped phase along with the corresponding 3D profiles (ground truths) of two random test samples. The colorbar shows the ground-truth deformation profile of the objects in radians, as related by Eq. (1).
Fig. 6. Complete ablation study for both decoder wings of the architecture and for the UNet$++$ architecture directly predicting the depth profile in radians through regression. The output of the left-side decoder in TriNet is passed through QGP and CLSPU for phase unwrapping, while the right-side decoder imitates the UNet$++$ architecture, which is considered for phase unwrapping along lines similar to PhaseNet 2.0. (a) TriNet denoiser + QGP, MSE: 0.0042. (b) TriNet denoiser + CLSPU, MSE: 0.0039. (c) TriNet fringe order predictor (UNet$++$), MSE: 2.613. (d) UNet$++$ as regression framework, MSE: 9.842. (e) TriNet, MSE: 0.0018.
Fig. 7. A simple 2D Gaussian is considered for analyzing the performance of conventional and deep learning methods. (a) Wrapped phase at 0 dB. (b) Ground truth. (c) QGP. (d) CLSPU. (e) WKF. (f) PhaseNet 2.0. (g) TriNet.
Fig. 8. Performance of the deep learning methods on random test samples 1 and 2 (of Fig. 5) at −10 dB SNR. First and third rows show the 3D profiles predicted by PhaseNet 2.0, by UNet$++$ without and with post-processing, and by the proposed TriNet. Second and fourth rows show the corresponding RMSE maps. Note the differences in the colorbars representing error ranges/depth profiles in radians.
Fig. 9. Variation of RMSE in phase reconstruction for the proposed approach versus other deep learning approaches at various SNRs of the interference fringes.
Fig. 10. Performance of the proposed TriNet and other conventional and deep learning methods on real deformation measurement (shown in radians). (a) Interference pattern. (b) Noisy wrapped phase. (c) QGP. (d) CLSPU. (e) WKF. (f) PhaseNet 2.0. (g) UNet$++$ (as an ablation study). (h) TriNet.
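As referenced in the Fig. 2 caption, the absolute phase is recovered by adding $2\pi$ times the predicted fringe order to the denoised wrapped phase. A minimal numerical sketch of this step follows; the function and array names are illustrative, not taken from the paper.

import numpy as np

def reconstruct_absolute_phase(denoised_wrapped, fringe_order):
    # denoised_wrapped: H x W phase map in radians (left-wing output).
    # fringe_order: H x W integer labels k(x, y) (right-wing output, argmax of the logits).
    return denoised_wrapped + 2.0 * np.pi * fringe_order

The resulting map is the unwrapped phase $\phi(x,y)$, which Eq. (1) relates to the deformation $\Delta D(x,y)$.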

Tables (3)

Table 1. Ablation Study in Terms of RMSE for Left (Denoising) and Right (Unwrapping) Decoder Results
Table 2. Ablation Study in Terms of RMSE for the Proposed Network at Different Noise Levels
Table 3. RMSE Comparisons across Different Noise Levels for 3D Profiling by the Approaches Considered

Equations (7)

Equations on this page are rendered with MathJax.

$\phi(x,y) = \frac{2\pi}{\lambda}\,\Delta D(x,y)$  (1)
$f(x,y) = a(x,y)\,e^{j\phi(x,y)} + \eta(x,y)$  (2)
$\psi(x,y) = \tan^{-1}\!\left(\frac{\Im[f(x,y)]}{\Re[f(x,y)]}\right)$  (3)
$k(x,y) = \operatorname{round}\!\left(\frac{\phi(x,y) - \psi(x,y)}{2\pi}\right)$  (4)
$L_{ce} = -\frac{1}{N}\sum_{k=1}^{N}\sum_{i=1}^{\mathrm{rows}}\sum_{j=1}^{\mathrm{cols}}\sum_{t=0}^{C} y_{kijt}\,\log(\hat{y}_{kijt})$  (5)
$L_{mse} = \frac{1}{N}\sum_{k=1}^{N}\left(Y_{k} - \hat{Y}_{k}\right)^{2}$  (6)
$L_{total} = L_{ce} + L_{mse}$  (7)
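
A minimal sketch of how the composite objective in Eqs. (5)–(7) can be assembled in PyTorch follows; the tensor shapes and the use of the built-in cross-entropy and MSE losses are assumptions for illustration, not the authors' training code.

import torch.nn as nn

ce_loss = nn.CrossEntropyLoss()   # categorical cross-entropy, Eq. (5)
mse_loss = nn.MSELoss()           # mean-squared error, Eq. (6)

def total_loss(fringe_logits, fringe_labels, denoised_pred, wrapped_clean):
    # fringe_logits: (N, C, H, W) class scores; fringe_labels: (N, H, W) integer fringe orders.
    # denoised_pred, wrapped_clean: (N, 1, H, W) wrapped-phase maps in radians.
    l_ce = ce_loss(fringe_logits, fringe_labels)
    l_mse = mse_loss(denoised_pred, wrapped_clean)
    return l_ce + l_mse           # equally weighted sum, Eq. (7)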
