Abstract

Phase unwrapping is an important but challenging problem in phase measurement. Despite decades of research effort, it remains poorly solved, especially in the presence of heavy noise and aliasing (undersampling). We propose a database generation method for phase-type objects and a one-step deep learning phase unwrapping method. With a trained deep neural network, previously unseen phase fields of living mouse osteoblasts and a dynamic candle flame are successfully unwrapped, demonstrating that this complicated nonlinear task can be performed directly, in a single step, by one deep neural network. The method's anti-noise and anti-aliasing performance, which exceeds that of classical methods, is highlighted in this paper.
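The failure mode the abstract refers to can be seen even in one dimension. The sketch below is illustrative only (the paper works with 2-D phase fields and a learned unwrapper, not this classical approach): a smooth phase spanning many 2π cycles is wrapped into its principal value in (-π, π], then recovered by classical Itoh-style unwrapping, which integrates the wrapped phase differences. This works here because neighboring samples differ by less than π; under the noise or aliasing conditions the paper targets, this assumption breaks and classical unwrapping fails.

```python
import numpy as np

# Hypothetical smooth test phase spanning several 2*pi cycles.
x = np.linspace(0.0, 1.0, 200)
true_phase = 12.0 * np.pi * x**2

# A measurement only yields the principal value in (-pi, pi].
wrapped = np.angle(np.exp(1j * true_phase))

# Classical unwrapping: integrate wrapped phase differences,
# assuming adjacent samples differ by less than pi. Noise or
# undersampling violates this assumption and produces 2*pi errors.
unwrapped = np.unwrap(wrapped)

print(np.allclose(unwrapped, true_phase, atol=1e-8))  # True
```

The maximum per-sample phase step of this test signal is about 0.38 rad, safely below π, so `np.unwrap` recovers the true phase exactly; halving the sample count enough to push steps past π reproduces the aliasing failure discussed in the abstract.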

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement



[Crossref]

Shimobaba, T.

Shipman, J. S.

M. D. Pritt and J. S. Shipman, “Least-Squares Two-Dimensional Phase Unwrapping Using Fft’s,” IEEE Trans. Geosci. Remote Sens. 32(3), 706–708 (1994).
[Crossref]

Shiraki, A.

Shlens, J.

C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE conference on computer vision and pattern recognition (IEEE, 2016), pp. 2818–2826.
[Crossref]

Simoncelli, E. P.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).
[Crossref] [PubMed]

Sinha, A.

Situ, G.

Song, X.

W. Yu, X. Tian, X. He, X. Song, L. Xue, C. Liu, and S. Wang, “Real time quantitative phase microscopy based on single-shot transport of intensity equation (ssTIE) method,” Appl. Phys. Lett. 109(7), 071112 (2016).
[Crossref]

Spoorthi, G. E.

G. E. Spoorthi, S. Gorthi, and R. K. S. S. Gorthi, “PhaseNet: A Deep Convolutional Neural Network for Two-Dimensional Phase Unwrapping,” IEEE Signal Process. Lett. 26(1), 54–58 (2019).
[Crossref]

Su, X.

Su, Z.

S. Wang, Z. Su, L. Ying, X. Peng, S. Zhu, F. Liang, D. Feng, and D. Liang, “Accelerating magnetic resonance imaging via deep learning,” in Proceedings of IEEE International Symposium on Biomedical Imaging (IEEE, 2016) pp. 514–517.
[Crossref]

Sun, J.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition (IEEE, 2016), pp. 770–778.

Sun, Y.

Szegedy, C.

C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE conference on computer vision and pattern recognition (IEEE, 2016), pp. 2818–2826.
[Crossref]

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition (IEEE, 2015), pp. 1–9.

Takagi, R.

Takahashi, T.

Tang, X.

C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref] [PubMed]

Tanida, J.

Teague, M. R.

Teh, Y. W.

G. E. Hinton, S. Osindero, and Y. W. Teh, “A fast learning algorithm for deep belief nets,” Neural Comput. 18(7), 1527–1554 (2006).
[Crossref] [PubMed]

Teng, D.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7(2), 17141 (2018).
[Crossref] [PubMed]

Tian, L.

Tian, X.

W. Yu, X. Tian, X. He, X. Song, L. Xue, C. Liu, and S. Wang, “Real time quantitative phase microscopy based on single-shot transport of intensity equation (ssTIE) method,” Appl. Phys. Lett. 109(7), 071112 (2016).
[Crossref]

Totz, J.

W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 1874–1883.
[Crossref]

Unser, M.

M. T. McCann, E. Froustey, M. Unser, M. Unser, M. Unser, and Kyong Hwan Jin, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26(9), 4509–4522 (2017).
[Crossref] [PubMed]

M. T. McCann, E. Froustey, M. Unser, M. Unser, M. Unser, and Kyong Hwan Jin, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26(9), 4509–4522 (2017).
[Crossref] [PubMed]

M. T. McCann, E. Froustey, M. Unser, M. Unser, M. Unser, and Kyong Hwan Jin, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26(9), 4509–4522 (2017).
[Crossref] [PubMed]

Vanhoucke, V.

C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE conference on computer vision and pattern recognition (IEEE, 2016), pp. 2818–2826.
[Crossref]

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition (IEEE, 2015), pp. 1–9.

Waller, L.

Wang, H.

H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018).
[Crossref] [PubMed]

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017).
[Crossref]

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).
[Crossref] [PubMed]

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).
[Crossref] [PubMed]

Wang, K.

Wang, P.

Wang, S.

W. Yu, X. Tian, X. He, X. Song, L. Xue, C. Liu, and S. Wang, “Real time quantitative phase microscopy based on single-shot transport of intensity equation (ssTIE) method,” Appl. Phys. Lett. 109(7), 071112 (2016).
[Crossref]

S. Wang, Z. Su, L. Ying, X. Peng, S. Zhu, F. Liang, D. Feng, and D. Liang, “Accelerating magnetic resonance imaging via deep learning,” in Proceedings of IEEE International Symposium on Biomedical Imaging (IEEE, 2016) pp. 514–517.
[Crossref]

Wang, W.

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).
[Crossref] [PubMed]

Wang, Z.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).
[Crossref] [PubMed]

W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 1874–1883.
[Crossref]

Werner, C. L.

R. M. Goldstein, H. A. Zebker, and C. L. Werner, “Satellite radar interferometry: two-dimensional phase unwrapping,” Radio Sci. 23(4), 713–720 (1988).
[Crossref]

Williams, R. J.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323(6088), 533–536 (1986).
[Crossref]

Wojna, Z.

C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE conference on computer vision and pattern recognition (IEEE, 2016), pp. 2818–2826.
[Crossref]

Xi, T.

Xia, Z.

Xu, Z.

Xue, L.

W. Yu, X. Tian, X. He, X. Song, L. Xue, C. Liu, and S. Wang, “Real time quantitative phase microscopy based on single-shot transport of intensity equation (ssTIE) method,” Appl. Phys. Lett. 109(7), 071112 (2016).
[Crossref]

Yamamoto, Y.

Yang, S. Y.

Ying, L.

S. Wang, Z. Su, L. Ying, X. Peng, S. Zhu, F. Liang, D. Feng, and D. Liang, “Accelerating magnetic resonance imaging via deep learning,” in Proceedings of IEEE International Symposium on Biomedical Imaging (IEEE, 2016) pp. 514–517.
[Crossref]

Yu, W.

W. Yu, X. Tian, X. He, X. Song, L. Xue, C. Liu, and S. Wang, “Real time quantitative phase microscopy based on single-shot transport of intensity equation (ssTIE) method,” Appl. Phys. Lett. 109(7), 071112 (2016).
[Crossref]

Yu, Y.

Yuan, X.

Zebker, H. A.

R. M. Goldstein, H. A. Zebker, and C. L. Werner, “Satellite radar interferometry: two-dimensional phase unwrapping,” Radio Sci. 23(4), 713–720 (1988).
[Crossref]

Zhang, J.

Zhang, Q.

Zhang, X.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition (IEEE, 2016), pp. 770–778.

Zhang, Y.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7(2), 17141 (2018).
[Crossref] [PubMed]

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017).
[Crossref]

Zhao, J.

Zhao, M.

Zhong, J.

Zhu, S.

S. Wang, Z. Su, L. Ying, X. Peng, S. Zhu, F. Liang, D. Feng, and D. Liang, “Accelerating magnetic resonance imaging via deep learning,” in Proceedings of IEEE International Symposium on Biomedical Imaging (IEEE, 2016) pp. 514–517.
[Crossref]

Zuo, C.

Supplementary Material (4)

Name       Description
» Visualization 1       A part of the training image set. This visualization shows 1000 examples of the 37,500-image simulated training set. The left side shows the real phase images, and the right side the corresponding wrapped phase images.
» Visualization 2       Anti-noise performance testing results.
» Visualization 3       Anti-aliasing performance testing results. This visualization demonstrates that our approach has an excellent anti-aliasing performance.
» Visualization 4       Unwrapping results of the dynamic candle flame.


Figures (11)

Fig. 1 Four examples of the random square matrices and the corresponding generated real phases. (a) 2 × 2 matrix to real phase, (b) 3 × 3 matrix to real phase, (c) 5 × 5 matrix to real phase, (d) 10 × 10 matrix to real phase.
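The generation idea illustrated in Fig. 1 can be sketched numerically: a small random square matrix is enlarged to the full image size by smooth interpolation, giving a continuous real-phase surface. This is a minimal sketch only; the bilinear scheme, the 256 × 256 output size, and the `height` rescaling are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def matrix_to_phase(seed_matrix, out_size=256, height=20.0):
    """Enlarge a small random matrix into a smooth 'real phase' surface.

    Bilinear interpolation and the peak-to-valley rescaling to `height`
    radians are assumptions made for illustration.
    """
    n = seed_matrix.shape[0]
    # Output-pixel coordinates expressed in the seed-matrix frame.
    coords = np.linspace(0, n - 1, out_size)
    x0 = np.clip(np.floor(coords).astype(int), 0, n - 2)
    fx = coords - x0
    # Separable bilinear interpolation: along rows first, then columns.
    rows = (1 - fx)[:, None] * seed_matrix[x0, :] + fx[:, None] * seed_matrix[x0 + 1, :]
    phase = (1 - fx)[None, :] * rows[:, x0] + fx[None, :] * rows[:, x0 + 1]
    # Rescale so the peak-to-valley phase height is `height` radians.
    phase -= phase.min()
    phase *= height / phase.max()
    return phase

rng = np.random.default_rng(0)
real_phase = matrix_to_phase(rng.random((5, 5)))  # cf. Fig. 1(c)
```

Larger seed matrices (e.g. 10 × 10, as in Fig. 1(d)) produce more spatial detail in the generated phase at the same overall height.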
Fig. 2 Training and testing of the network. (a) Network-training flow diagram. The orange part is the data-preparation stage, and the blue part is the training stage. (b) Network-testing flow diagram. After training, the trained network is fed a wrapped phase image that is not included in the training image set and rapidly outputs the corresponding unwrapped phase image.
Fig. 3 Detailed schematic of the convolutional neural network architecture. Each blue box corresponds to a multi-channel feature map. The number of channels is given on top of each box, and the x-y size at its lower-left edge. White dotted boxes represent copied feature maps. The arrows and symbols denote the different operations.
Fig. 4 CNN result for an example taken from the testing image set. Upper row: the wrapped, real, and CNN-output unwrapped phase images. Lower row: comparison of the phase height along the central lines indicated in the real phase (red lines) and the CNN-output unwrapped phase (blue lines).
Fig. 5 SSIM indices (solid lines) of the CNN, LS, QG, WFT-LS, and WFT-QG results against the ground truth, and SNR (dotted lines) of the noisy wrapped images, as the noise level increases from 0.01 to 0.40 (standard deviations of the Gaussian and multiplicative noise, and density of the salt-and-pepper noise). The lower part shows, for noise levels from 0.05 to 0.40 at intervals of 0.05, the wrapped phase, the real phase, and the error maps of the CNN (green box), LS (blue box), QG (cyan box), WFT-LS (purple box), and WFT-QG (orange box) results. Also see Supplementary Visualization 2 for a detailed comparison.
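The three noise models of the Fig. 5 robustness test can be sketched as follows. The parameterization is an assumption made for illustration: the "level" is taken as the standard deviation for the Gaussian and multiplicative noise, and as the corrupted-pixel density for the salt-and-pepper noise, as the caption suggests; the paper's exact generator may differ.

```python
import numpy as np

def add_noise(img, kind, level, rng):
    """Corrupt an image with one of the three noise models of Fig. 5."""
    if kind == "gaussian":
        # Additive zero-mean Gaussian noise with standard deviation `level`.
        return img + rng.normal(0.0, level, img.shape)
    if kind == "multiplicative":
        # Speckle-like noise: each pixel scaled by (1 + Gaussian).
        return img * (1.0 + rng.normal(0.0, level, img.shape))
    if kind == "salt_pepper":
        # A fraction `level` of pixels jumps to an extreme of the data range.
        noisy = img.copy()
        mask = rng.random(img.shape) < level
        noisy[mask] = rng.choice([img.min(), img.max()], size=int(mask.sum()))
        return noisy
    raise ValueError(f"unknown noise kind: {kind}")

rng = np.random.default_rng(1)
clean = np.linspace(-np.pi, np.pi, 64 * 64).reshape(64, 64)
noisy = add_noise(clean, "salt_pepper", 0.05, rng)
```

Sweeping `level` from 0.01 to 0.40 for each `kind` reproduces the abscissa of the SSIM/SNR curves in Fig. 5.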
Fig. 6 SSIM indices (solid lines) of the CNN, LS, and QG results against the ground truth, the aliasing pixel percentage (dotted line) of the real phase image, and the incorrect pixel percentage (dotted lines) of the three methods' results, as the height of the real phase image increases from 5 (~1.6π) to 100 (~31.8π). The lower part shows the wrapped phase, the real phase, the aliasing maps (magenta box), and the error maps of the CNN (green box), LS (blue box), and QG (cyan box) results for heights from 11 to 99 at intervals of 11 units, except that the second column shows the results at a phase height of 27, where aliasing starts to appear. Also see Supplementary Visualization 3 for a detailed comparison.
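The aliasing maps of Fig. 6 rest on a standard criterion: a pixel pair is undersampled where the true phase changes by more than π between adjacent pixels, so the wrapped gradient no longer equals the true gradient. A hedged sketch of such a map (the paper's exact construction may differ):

```python
import numpy as np

def aliasing_map(real_phase):
    """Mark pixels adjacent to a phase jump exceeding pi (undersampled)."""
    aliased = np.zeros(real_phase.shape, dtype=bool)
    gy = np.abs(np.diff(real_phase, axis=0)) > np.pi  # vertical jumps
    gx = np.abs(np.diff(real_phase, axis=1)) > np.pi  # horizontal jumps
    # Flag both endpoints of every violating pixel pair.
    aliased[:-1, :] |= gy
    aliased[1:, :] |= gy
    aliased[:, :-1] |= gx
    aliased[:, 1:] |= gx
    return aliased

# A slope of 2 rad/pixel stays below pi; 4 rad/pixel exceeds it everywhere.
x = np.arange(32)
gentle = np.tile(2.0 * x, (32, 1))
steep = np.tile(4.0 * x, (32, 1))
```

Raising the phase height at fixed image size steepens the gradients, which is why the aliasing pixel percentage in Fig. 6 grows with height once the ~π-per-pixel limit is crossed.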
Fig. 7 Comparison of the results of the CNN, LS, and QG methods.
Fig. 8 Wrapped phase, the corresponding unwrapped phase of the dynamic candle flame reconstructed by the DLPU and LS methods, and their difference maps at different frames within 20 s. (I) 1st frame, (II) 63rd frame, (III) 152nd frame, (IV) 209th frame, (V) 238th frame, (VI) 299th frame, (VII) 374th frame, (VIII) 400th frame. Also see Supplementary Visualization 4 for the full results.
Fig. 9 Anti-aliasing ability test for a flame phase. (I) Wrapped phase, (II) real phase, (III) aliasing map, (IV) 3D display of the DLPU result, (V) 3D display of the LS result, (VI) 3D display of the QG result, (VII) error map of the DLPU result, (VIII) error map of the LS result, (IX) error map of the QG result.
Fig. 10 Spectral analysis of the training sets. (a) Real phase, wrapped phase, and the corresponding fast Fourier transform (FFT) spectra for the same distribution but different heights from 5 to 85 radians (I–V). (b) Real phase, wrapped phase, and the corresponding FFT spectra for the same height but different sizes of the random square matrices: 5 × 5, 10 × 10, 15 × 15, 20 × 20, and 25 × 25 (I–V).
Fig. 11 Comparison of the flame wrapped phase and the two most similar phases in the training set, shown at the five convolution layers before the max-pooling operations (Con_1 to Con_5). Row (a) is from the flame wrapped phase; rows (b) and (c) are from the two most similar phases in the training set.

Tables (3)

Table 1 SSIM indices of the three methods' results with the real phase shown in Fig. 9.

Table 2 The proportion of the training set whose SSIM indices with the flame phase shown in Fig. 8 are higher than 0.5.

Table 3 SSIM indices of the flame phase with the two most similar phases in the training set shown in Fig. 11. SSIM 1 is for the most similar phase in the training set; SSIM 2 is for the second most similar phase.

Equations (1)

ψ(x, y) = angle{exp[jφ(x, y)]},
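The wrapping operator above folds any real phase φ into the principal interval (−π, π]; a minimal numerical illustration, assuming NumPy:

```python
import numpy as np

def wrap(phi):
    """Wrapping operator psi = angle{exp(j*phi)}: fold phi into (-pi, pi]."""
    return np.angle(np.exp(1j * phi))

# A linear ramp exceeding 2*pi is folded back, producing the familiar
# sawtooth-like wrapped phase that the network must invert.
phi = np.linspace(0, 4 * np.pi, 9)
psi = wrap(phi)
```

Phase unwrapping is the inverse problem: recovering φ from ψ, which is ill-posed wherever noise or aliasing corrupts the 2π-discontinuity cues.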