Abstract

The phase extraction neural network (PhENN) [Optica 4, 1117 (2017)] is a computational architecture, based on deep machine learning, for lensless quantitative phase retrieval from raw intensity data. PhENN is a deep convolutional neural network trained on examples consisting of pairs of true phase objects and their corresponding intensity diffraction patterns; thereafter, given a test raw intensity pattern, PhENN reconstructs the original phase object robustly, in many cases even for objects outside the database from which the training examples were drawn. Here, we show that the spatial frequency content of the training examples is an important factor limiting PhENN’s spatial frequency response. For example, if the training database is relatively sparse in high spatial frequencies, as most natural scenes are, PhENN’s ability to resolve fine spatial features in test patterns will be correspondingly limited. To combat this issue, we propose “flattening” the power spectral density of the training examples before presenting them to PhENN. For phase objects following the statistics of natural scenes, we demonstrate experimentally that this spectral pre-modulation method enhances the spatial resolution of PhENN by a factor of 2.
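The “flattening” idea can be illustrated with a minimal sketch. This is not the paper’s exact pre-modulation filter; the function name and the choice of full spectral whitening (keeping only the Fourier phase) are assumptions for illustration:

```python
import numpy as np

def flatten_psd(img, eps=1e-8):
    """Whiten an image's power spectral density (illustrative sketch).

    Natural scenes have a PSD that decays steeply with spatial frequency,
    so fine features are under-represented in training data. Dividing each
    Fourier coefficient by its magnitude keeps only the phase, giving every
    spatial frequency (nearly) equal power.
    """
    F = np.fft.fft2(img)
    # Hermitian symmetry of F is preserved, so the inverse FFT is real
    # up to floating-point error; .real discards that error.
    flat = np.fft.ifft2(F / (np.abs(F) + eps)).real
    # Rescale to [0, 1] so the result can be mapped to a phase object.
    return (flat - flat.min()) / (flat.max() - flat.min() + eps)
```

A flattened training example retains the spatial structure (edges, feature locations) encoded in the Fourier phase while boosting the relative weight of high spatial frequencies that the network must learn to reconstruct.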

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

References

  1. R. Horisaki, R. Takagi, and J. Tanida, “Learning-based imaging through scattering media,” Opt. Express 24, 13738–13743 (2016).
    [Crossref] [PubMed]
  2. M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv preprint arXiv:1708.07881 (2017).
  3. S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5, 803–813 (2018).
    [Crossref]
  4. K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26, 4509–4522 (2017).
    [Crossref]
  5. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017).
    [Crossref]
  6. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
    [Crossref]
  7. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
    [Crossref]
  8. M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7, 17865 (2017).
    [Crossref]
  9. N. Borhani, E. Kakkava, C. Moser, and D. Psaltis, “Learning to see through multimode fibers,” Optica 5, 960–966 (2018).
    [Crossref]
  10. Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5, 337–344 (2018).
    [Crossref]
  11. T. Nguyen, V. Bui, V. Lam, C. B. Raub, L.-C. Chang, and G. Nehmetallah, “Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection,” Opt. Express 25, 15043–15057 (2017).
    [Crossref] [PubMed]
  12. S. Jiao, Z. Jin, C. Chang, C. Zhou, W. Zou, and X. Li, “Compression of phase-only holograms with JPEG standard and deep learning,” arXiv preprint arXiv:1806.03811 (2018).
  13. A. Goy, K. Arthur, S. Li, and G. Barbastathis, “Low photon count phase retrieval using deep learning,” arXiv preprint arXiv:1806.10029 (2018).
  14. H. Liao, F. Li, and M. K. Ng, “Selection of regularization parameter in total variation image restoration,” J. Opt. Soc. Am. A 26, 2311–2320 (2009).
  15. M. Mardani, H. Monajemi, V. Papyan, S. Vasanawala, D. Donoho, and J. Pauly, “Recurrent generative adversarial networks for proximal learning and automated compressive image recovery,” arXiv preprint arXiv:1711.10046 (2017).
  16. J. W. Goodman and R. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. 11, 77–79 (1967).
    [Crossref]
  17. Y. Rivenson, A. Stern, and B. Javidi, “Compressive Fresnel holography,” J. Disp. Technol. 6, 506–509 (2010).
    [Crossref]
  18. J. H. Milgram and W. Li, “Computational reconstruction of images from holograms,” Appl. Opt. 41, 853–864 (2002).
    [Crossref] [PubMed]
  19. D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express 17, 13040–13049 (2009).
    [Crossref] [PubMed]
  20. L. Williams, G. Nehmetallah, and P. P. Banerjee, “Digital tomographic compressive holographic reconstruction of three-dimensional objects in transmissive and reflective geometries,” Appl. Opt. 52, 1702–1710 (2013).
    [Crossref] [PubMed]
  21. K. Creath, “Phase-shifting speckle interferometry,” Appl. Opt. 24, 3053–3058 (1985).
    [Crossref] [PubMed]
  22. M. R. Teague, “Deterministic phase retrieval: a Green’s function solution,” J. Opt. Soc. Am. 73, 1434–1441 (1983).
    [Crossref]
  23. S. S. Kou, L. Waller, G. Barbastathis, and C. J. Sheppard, “Transport-of-intensity approach to differential interference contrast (ti-dic) microscopy for quantitative phase imaging,” Opt. Lett. 35, 447–449 (2010).
    [Crossref] [PubMed]
  24. D. Paganin and K. A. Nugent, “Noninterferometric phase imaging with partially coherent light,” Phys. Rev. Lett. 80, 2586 (1998).
    [Crossref]
  25. J. A. Schmalz, T. E. Gureyev, D. M. Paganin, and K. M. Pavlov, “Phase retrieval using radiation and matter-wave fields: Validity of Teague’s method for solution of the transport-of-intensity equation,” Phys. Rev. A 84, 023808 (2011).
    [Crossref]
  26. L. Waller, S. S. Kou, C. J. Sheppard, and G. Barbastathis, “Phase from chromatic aberrations,” Opt. Express 18, 22817–22825 (2010).
    [Crossref] [PubMed]
  27. L. Waller, M. Tsang, S. Ponda, S. Y. Yang, and G. Barbastathis, “Phase and amplitude imaging from noisy images by Kalman filtering,” Opt. Express 19, 2805–2815 (2011).
    [Crossref] [PubMed]
  28. L. Tian, J. C. Petruccelli, Q. Miao, H. Kudrolli, V. Nagarkar, and G. Barbastathis, “Compressive x-ray phase tomography based on the transport of intensity equation,” Opt. Lett. 38, 3418–3421 (2013).
    [Crossref] [PubMed]
  29. A. Pan, L. Xu, J. C. Petruccelli, R. Gupta, B. Singh, and G. Barbastathis, “Contrast enhancement in x-ray phase contrast tomography,” Opt. Express 22, 18020–18026 (2014).
    [Crossref] [PubMed]
  30. Y. Zhu, A. Shanker, L. Tian, L. Waller, and G. Barbastathis, “Low-noise phase imaging by hybrid uniform and structured illumination transport of intensity equation,” Opt. Express 22, 26696–26711 (2014).
    [Crossref] [PubMed]
  31. R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).
  32. J. R. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Opt. Lett. 3, 27–29 (1978).
    [Crossref] [PubMed]
  33. R. Gonsalves, “Phase retrieval from modulus data,” J. Opt. Soc. Am. 66, 961–964 (1976).
    [Crossref]
  34. J. Fienup and C. Wackerman, “Phase-retrieval stagnation problems and solutions,” J. Opt. Soc. Am. A 3, 1897–1907 (1986).
  35. H. H. Bauschke, P. L. Combettes, and D. R. Luke, “Phase retrieval, error reduction algorithm, and Fienup variants: a view from convex optimization,” J. Opt. Soc. Am. A 19, 1334–1345 (2002).
  36. A. van der Schaaf and J. H. van Hateren, “Modelling the power spectra of natural images: statistics and information,” Vis. Res. 36, 2759–2770 (1996).
    [Crossref] [PubMed]
  37. S. Li, A. Sinha, J. Lee, and G. Barbastathis, “Quantitative phase microscopy using deep neural networks,” in Quantitative Phase Imaging IV, vol. 10503 (International Society for Optics and Photonics, 2018), p. 105032D.
  38. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.
  39. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 770–778.
  40. G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” Technical Report 07-49, University of Massachusetts, Amherst (2007).
  41. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
    [Crossref]
  42. F. S. Samaria and A. C. Harter, “Parameterisation of a stochastic model for human face identification,” in Proceedings of the Second IEEE Workshop on Applications of Computer Vision, (IEEE, 1994), pp. 138–142.
  43. AT&T Laboratories Cambridge, “AT&T Database of Faces,” (1994). Data retrieved from https://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
  44. A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” Tech. rep., University of Toronto (2009).
  45. Y. LeCun, C. Cortes, and C. J. Burges, “MNIST handwritten digit database,” AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist (2010).
  46. Lens L1 determines the aperture stop with diameter 25.4 mm, i.e. a numerical aperture NA = 12.7/150 = 0.0847. The nominal diffraction-limited resolution should be d0 = λ/(2NA) = 3.74 µm. That calculation is irrelevant to PhENN, since objects of that spatial frequency are never presented to it during training.
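The arithmetic in note 46 can be checked directly. The He-Ne wavelength of 632.8 nm is an assumption inferred from the quoted result, not stated in the note:

```python
# Geometry of lens L1 from note 46.
aperture_diameter = 25.4e-3  # m (1-inch aperture stop)
focal_length = 150e-3        # m (implied by the quoted ratio 12.7/150)
wavelength = 632.8e-9        # m, He-Ne line (assumed; consistent with d0 below)

# Numerical aperture: half the aperture diameter over the focal length.
NA = (aperture_diameter / 2) / focal_length   # 12.7/150 ≈ 0.0847

# Nominal diffraction-limited resolution.
d0 = wavelength / (2 * NA)                    # ≈ 3.74 µm

print(f"NA = {NA:.4f}, d0 = {d0 * 1e6:.2f} µm")
```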

2018 (4)

2017 (6)

T. Nguyen, V. Bui, V. Lam, C. B. Raub, L.-C. Chang, and G. Nehmetallah, “Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection,” Opt. Express 25, 15043–15057 (2017).
[Crossref] [PubMed]

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Reports 7, 17865 (2017).
[Crossref]

K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Transactions on Image Process. 26, 4509–4522 (2017).
[Crossref]

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017).
[Crossref]

M. Mardani, H. Monajemi, V. Papyan, S. Vasanawala, D. Donoho, and J. Pauly, “Recurrent generative adversarial networks for proximal learning and automated compressive image recovery,” arXiv preprint ar,”Xiv: 1711.10046 (2017).

2016 (1)

2015 (1)

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

2014 (2)

2013 (2)

2011 (2)

L. Waller, M. Tsang, S. Ponda, S. Y. Yang, and G. Barbastathis, “Phase and amplitude imaging from noisy images by kalman filtering,” Opt. Express 19, 2805–2815 (2011).
[Crossref] [PubMed]

J. A. Schmalz, T. E. Gureyev, D. M. Paganin, and K. M. Pavlov, “Phase retrieval using radiation and matter-wave fields: Validity of teague’s method for solution of the transport-of-intensity equation,” Phys. Rev. A 84, 023808 (2011).
[Crossref]

2010 (3)

2009 (2)

2002 (2)

1998 (1)

D. Paganin and K. A. Nugent, “Noninterferometric phase imaging with partially coherent light,” Phys. Rev. Lett. 80, 2586 (1998).
[Crossref]

1996 (1)

v. A. Van der Schaaf and J. v. van Hateren, “Modelling the power spectra of natural images: statistics and information,” Vis. Res. 36, 2759–2770 (1996).
[Crossref] [PubMed]

1986 (1)

1985 (1)

1983 (1)

1978 (1)

1976 (1)

1972 (1)

R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

1967 (1)

J. W. Goodman and R. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. 11, 77–79 (1967).
[Crossref]

Arthur, K.

A. Goy, K. Arthur, S. Li, and G. Barbastathis, “Low photon count phase retrieval using deep learning,” arXiv preprint ar”Xiv:1806.10029 (2018).

Banerjee, P. P.

Barbastathis, G.

S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5, 803–813 (2018).
[Crossref]

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017).
[Crossref]

A. Pan, L. Xu, J. C. Petruccelli, R. Gupta, B. Singh, and G. Barbastathis, “Contrast enhancement in x-ray phase contrast tomography,” Opt. Express 22, 18020–18026 (2014).
[Crossref] [PubMed]

Y. Zhu, A. Shanker, L. Tian, L. Waller, and G. Barbastathis, “Low-noise phase imaging by hybrid uniform and structured illumination transport of intensity equation,” Opt. Express 22, 26696–26711 (2014).
[Crossref] [PubMed]

L. Tian, J. C. Petruccelli, Q. Miao, H. Kudrolli, V. Nagarkar, and G. Barbastathis, “Compressive x-ray phase tomography based on the transport of intensity equation,” Opt. Lett. 38, 3418–3421 (2013).
[Crossref] [PubMed]

L. Waller, M. Tsang, S. Ponda, S. Y. Yang, and G. Barbastathis, “Phase and amplitude imaging from noisy images by kalman filtering,” Opt. Express 19, 2805–2815 (2011).
[Crossref] [PubMed]

L. Waller, S. S. Kou, C. J. Sheppard, and G. Barbastathis, “Phase from chromatic aberrations,” Opt. Express 18, 22817–22825 (2010).
[Crossref] [PubMed]

S. S. Kou, L. Waller, G. Barbastathis, and C. J. Sheppard, “Transport-of-intensity approach to differential interference contrast (ti-dic) microscopy for quantitative phase imaging,” Opt. Lett. 35, 447–449 (2010).
[Crossref] [PubMed]

A. Goy, K. Arthur, S. Li, and G. Barbastathis, “Low photon count phase retrieval using deep learning,” arXiv preprint ar”Xiv:1806.10029 (2018).

S. Li, A. Sinha, J. Lee, and G. Barbastathis, “Quantitative phase microscopy using deep neural networks,” in Quantitative Phase Imaging IV,vol. 10503 (International Society for Optics and Photonics, 2018), p. 105032D.

Bauschke, H. H.

Berg, A.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Berg, T.

G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” Tech. rep., Technical Report 07-49, University of Massachusetts, Amherst (2007).

Bernstein, M.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Borhani, N.

Brady, D. J.

Brox, T.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention, (Springer, 2015), pp. 234–241.

Bui, V.

Chang, C.

S. Jiao, Z. Jin, C. Chang, C. Zhou, W. Zou, and X. Li, “Compression of phase-only holograms with jpeg standard and deep learning,” arXiv preprint ar”Xiv:1806.03811 (2018).

Chang, L.-C.

Chen, N.

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Reports 7, 17865 (2017).
[Crossref]

Choi, K.

Combettes, P. L.

Creath, K.

Deng, J.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Deng, M.

Donoho, D.

M. Mardani, H. Monajemi, V. Papyan, S. Vasanawala, D. Donoho, and J. Pauly, “Recurrent generative adversarial networks for proximal learning and automated compressive image recovery,” arXiv preprint ar,”Xiv: 1711.10046 (2017).

Fei-Fei, L.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Fienup, J.

Fienup, J. R.

Fischer, P.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention, (Springer, 2015), pp. 234–241.

Froustey, E.

K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Transactions on Image Process. 26, 4509–4522 (2017).
[Crossref]

Gerchberg, R. W.

R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

Gonsalves, R.

Goodman, J. W.

J. W. Goodman and R. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. 11, 77–79 (1967).
[Crossref]

Göröcs, Z.

Goy, A.

A. Goy, K. Arthur, S. Li, and G. Barbastathis, “Low photon count phase retrieval using deep learning,” arXiv preprint ar”Xiv:1806.10029 (2018).

Günaydin, H.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light. Sci. & Appl. 7, 17141 (2018).
[Crossref]

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

Gupta, R.

Gureyev, T. E.

J. A. Schmalz, T. E. Gureyev, D. M. Paganin, and K. M. Pavlov, “Phase retrieval using radiation and matter-wave fields: Validity of teague’s method for solution of the transport-of-intensity equation,” Phys. Rev. A 84, 023808 (2011).
[Crossref]

Harter, A. C.

F. S. Samaria and A. C. Harter, “Parameterisation of a stochastic model for human face identification,” in Proceedings of the Second IEEE Workshop on Applications of Computer Vision, (IEEE, 1994), pp. 138–142.

Hateren, J. v. van

v. A. Van der Schaaf and J. v. van Hateren, “Modelling the power spectra of natural images: statistics and information,” Vis. Res. 36, 2759–2770 (1996).
[Crossref] [PubMed]

He, K.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (IEEE, 2016), pp. 770–778.

Hinton, G.

A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” Tech. rep., University of Toronto (2009).

Horisaki, R.

Huang, G. B.

G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” Tech. rep., Technical Report 07-49, University of Massachusetts, Amherst (2007).

Huang, Z.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Javidi, B.

Y. Rivenson, A. Stern, and B. Javidi, “Compressive fresnel holography,” J. Disp. Technol. 6, 506–509 (2010).
[Crossref]

Jiao, S.

S. Jiao, Z. Jin, C. Chang, C. Zhou, W. Zou, and X. Li, “Compression of phase-only holograms with jpeg standard and deep learning,” arXiv preprint ar”Xiv:1806.03811 (2018).

Jin, K. H.

K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Transactions on Image Process. 26, 4509–4522 (2017).
[Crossref]

Jin, Z.

S. Jiao, Z. Jin, C. Chang, C. Zhou, W. Zou, and X. Li, “Compression of phase-only holograms with jpeg standard and deep learning,” arXiv preprint ar”Xiv:1806.03811 (2018).

Kakkava, E.

Karpathy, A.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Khosla, A.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Kou, S. S.

Krause, J.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Krizhevsky, A.

A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” Tech. rep., University of Toronto (2009).

Kudrolli, H.

Lam, E. Y.

Lam, V.

Lawrence, R.

J. W. Goodman and R. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. 11, 77–79 (1967).
[Crossref]

Learned-Miller, E.

G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” Tech. rep., Technical Report 07-49, University of Massachusetts, Amherst (2007).

Lee, J.

S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5, 803–813 (2018).
[Crossref]

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017).
[Crossref]

S. Li, A. Sinha, J. Lee, and G. Barbastathis, “Quantitative phase microscopy using deep neural networks,” in Quantitative Phase Imaging IV,vol. 10503 (International Society for Optics and Photonics, 2018), p. 105032D.

Li, F.

Li, G.

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Reports 7, 17865 (2017).
[Crossref]

M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv preprint ar”Xiv: 1708.07881 (2017).

Li, S.

S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5, 803–813 (2018).
[Crossref]

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017).
[Crossref]

A. Goy, K. Arthur, S. Li, and G. Barbastathis, “Low photon count phase retrieval using deep learning,” arXiv preprint ar”Xiv:1806.10029 (2018).

S. Li, A. Sinha, J. Lee, and G. Barbastathis, “Quantitative phase microscopy using deep neural networks,” in Quantitative Phase Imaging IV,vol. 10503 (International Society for Optics and Photonics, 2018), p. 105032D.

Li, W.

Li, X.

S. Jiao, Z. Jin, C. Chang, C. Zhou, W. Zou, and X. Li, “Compression of phase-only holograms with jpeg standard and deep learning,” arXiv preprint ar”Xiv:1806.03811 (2018).

Liao, H.

Lim, S.

Luke, D. R.

Lyu, M.

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Reports 7, 17865 (2017).
[Crossref]

M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv preprint ar”Xiv: 1708.07881 (2017).

Ma, S.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Mardani, M.

M. Mardani, H. Monajemi, V. Papyan, S. Vasanawala, D. Donoho, and J. Pauly, “Recurrent generative adversarial networks for proximal learning and automated compressive image recovery,” arXiv preprint ar,”Xiv: 1711.10046 (2017).

Marks, D. L.

McCann, M. T.

K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Transactions on Image Process. 26, 4509–4522 (2017).
[Crossref]

Miao, Q.

Milgram, J. H.

Monajemi, H.

M. Mardani, H. Monajemi, V. Papyan, S. Vasanawala, D. Donoho, and J. Pauly, “Recurrent generative adversarial networks for proximal learning and automated compressive image recovery,” arXiv preprint ar,”Xiv: 1711.10046 (2017).

Moser, C.

Nagarkar, V.

Nehmetallah, G.

Ng, M. K.

Nguyen, T.

Nugent, K. A.

D. Paganin and K. A. Nugent, “Noninterferometric phase imaging with partially coherent light,” Phys. Rev. Lett. 80, 2586 (1998).
[Crossref]

Ozcan, A.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light. Sci. & Appl. 7, 17141 (2018).
[Crossref]

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

Paganin, D.

D. Paganin and K. A. Nugent, “Noninterferometric phase imaging with partially coherent light,” Phys. Rev. Lett. 80, 2586 (1998).
[Crossref]

Paganin, D. M.

J. A. Schmalz, T. E. Gureyev, D. M. Paganin, and K. M. Pavlov, “Phase retrieval using radiation and matter-wave fields: Validity of teague’s method for solution of the transport-of-intensity equation,” Phys. Rev. A 84, 023808 (2011).
[Crossref]

Pan, A.

Papyan, V.

M. Mardani, H. Monajemi, V. Papyan, S. Vasanawala, D. Donoho, and J. Pauly, “Recurrent generative adversarial networks for proximal learning and automated compressive image recovery,” arXiv preprint ar,”Xiv: 1711.10046 (2017).

Pauly, J.

M. Mardani, H. Monajemi, V. Papyan, S. Vasanawala, D. Donoho, and J. Pauly, “Recurrent generative adversarial networks for proximal learning and automated compressive image recovery,” arXiv preprint ar,”Xiv: 1711.10046 (2017).

Pavlov, K. M.

J. A. Schmalz, T. E. Gureyev, D. M. Paganin, and K. M. Pavlov, “Phase retrieval using radiation and matter-wave fields: Validity of teague’s method for solution of the transport-of-intensity equation,” Phys. Rev. A 84, 023808 (2011).
[Crossref]

Petruccelli, J. C.

Ponda, S.

Psaltis, D.

Ramesh, M.

G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” Tech. rep., Technical Report 07-49, University of Massachusetts, Amherst (2007).

Raub, C. B.

Ren, S.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (IEEE, 2016), pp. 770–778.

Ren, Z.

Rivenson, Y.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light. Sci. & Appl. 7, 17141 (2018).
[Crossref]

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

Y. Rivenson, A. Stern, and B. Javidi, “Compressive fresnel holography,” J. Disp. Technol. 6, 506–509 (2010).
[Crossref]

Ronneberger, O.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention, (Springer, 2015), pp. 234–241.

Russakovsky, O.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Samaria, F. S.

F. S. Samaria and A. C. Harter, “Parameterisation of a stochastic model for human face identification,” in Proceedings of the Second IEEE Workshop on Applications of Computer Vision, (IEEE, 1994), pp. 138–142.

Satheesh, S.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Schaaf, v. A. Van der

v. A. Van der Schaaf and J. v. van Hateren, “Modelling the power spectra of natural images: statistics and information,” Vis. Res. 36, 2759–2770 (1996).
[Crossref] [PubMed]

Schmalz, J. A.

J. A. Schmalz, T. E. Gureyev, D. M. Paganin, and K. M. Pavlov, “Phase retrieval using radiation and matter-wave fields: Validity of teague’s method for solution of the transport-of-intensity equation,” Phys. Rev. A 84, 023808 (2011).
[Crossref]

Shanker, A.

Sheppard, C. J.

Singh, B.

Sinha, A.

S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5, 803–813 (2018).
[Crossref]

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017).
[Crossref]

S. Li, A. Sinha, J. Lee, and G. Barbastathis, “Quantitative phase microscopy using deep neural networks,” in Quantitative Phase Imaging IV,vol. 10503 (International Society for Optics and Photonics, 2018), p. 105032D.

Situ, G.

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Reports 7, 17865 (2017).
[Crossref]

M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv preprint ar”Xiv: 1708.07881 (2017).

Stern, A.

Y. Rivenson, A. Stern, and B. Javidi, “Compressive fresnel holography,” J. Disp. Technol. 6, 506–509 (2010).
[Crossref]

Su, H.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Sun, J.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (IEEE, 2016), pp. 770–778.

Takagi, R.

Tanida, J.

Teague, M. R.

Teng, D.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light. Sci. & Appl. 7, 17141 (2018).
[Crossref]

Tian, L.

Tsang, M.

Unser, M.

K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Transactions on Image Process. 26, 4509–4522 (2017).
[Crossref]

Vasanawala, S.

M. Mardani, H. Monajemi, V. Papyan, S. Vasanawala, D. Donoho, and J. Pauly, “Recurrent generative adversarial networks for proximal learning and automated compressive image recovery,” arXiv preprint ar,”Xiv: 1711.10046 (2017).

Wackerman, C.

Waller, L.

Wang, H.

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Reports 7, 17865 (2017).
[Crossref]

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Reports 7, 17865 (2017).
[Crossref]

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv preprint ar”Xiv: 1708.07881 (2017).

Wang, W.

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Reports 7, 17865 (2017).
[Crossref]

Williams, L.

Xu, L.

Xu, Z.

Yang, S. Y.

Zhang, X.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (IEEE, 2016), pp. 770–778.

Zhang, Y.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light. Sci. & Appl. 7, 17141 (2018).
[Crossref]

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

Zhou, C.

S. Jiao, Z. Jin, C. Chang, C. Zhou, W. Zou, and X. Li, “Compression of phase-only holograms with JPEG standard and deep learning,” arXiv preprint arXiv:1806.03811 (2018).

Zhu, Y.

Zou, W.

S. Jiao, Z. Jin, C. Chang, C. Zhou, W. Zou, and X. Li, “Compression of phase-only holograms with JPEG standard and deep learning,” arXiv preprint arXiv:1806.03811 (2018).

Appl. Opt. (3)

Appl. Phys. Lett. (1)

J. W. Goodman and R. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. 11, 77–79 (1967).
[Crossref]

IEEE Trans. Image Process. (1)

K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26, 4509–4522 (2017).
[Crossref]

Int. J. Comput. Vis. (1)

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

J. Disp. Technol. (1)

Y. Rivenson, A. Stern, and B. Javidi, “Compressive Fresnel holography,” J. Disp. Technol. 6, 506–509 (2010).
[Crossref]

J. Opt. Soc. Am. (2)

J. Opt. Soc. Am. A (3)

Light. Sci. & Appl. (1)

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light. Sci. & Appl. 7, 17141 (2018).
[Crossref]

Opt. Express (7)

Opt. Lett. (3)

Optica (5)

Optik (1)

R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

Phys. Rev. A (1)

J. A. Schmalz, T. E. Gureyev, D. M. Paganin, and K. M. Pavlov, “Phase retrieval using radiation and matter-wave fields: Validity of teague’s method for solution of the transport-of-intensity equation,” Phys. Rev. A 84, 023808 (2011).
[Crossref]

Phys. Rev. Lett. (1)

D. Paganin and K. A. Nugent, “Noninterferometric phase imaging with partially coherent light,” Phys. Rev. Lett. 80, 2586 (1998).
[Crossref]

Sci. Reports (1)

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Reports 7, 17865 (2017).
[Crossref]

Vis. Res. (1)

A. van der Schaaf and J. H. van Hateren, “Modelling the power spectra of natural images: statistics and information,” Vis. Res. 36, 2759–2770 (1996).
[Crossref] [PubMed]

arXiv (1)

M. Mardani, H. Monajemi, V. Papyan, S. Vasanawala, D. Donoho, and J. Pauly, “Recurrent generative adversarial networks for proximal learning and automated compressive image recovery,” arXiv preprint arXiv:1711.10046 (2017).

Other (12)

S. Jiao, Z. Jin, C. Chang, C. Zhou, W. Zou, and X. Li, “Compression of phase-only holograms with JPEG standard and deep learning,” arXiv preprint arXiv:1806.03811 (2018).

A. Goy, K. Arthur, S. Li, and G. Barbastathis, “Low photon count phase retrieval using deep learning,” arXiv preprint arXiv:1806.10029 (2018).

M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv preprint arXiv:1708.07881 (2017).

S. Li, A. Sinha, J. Lee, and G. Barbastathis, “Quantitative phase microscopy using deep neural networks,” in Quantitative Phase Imaging IV, vol. 10503 (International Society for Optics and Photonics, 2018), p. 105032D.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention, (Springer, 2015), pp. 234–241.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (IEEE, 2016), pp. 770–778.

G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” Tech. rep., Technical Report 07-49, University of Massachusetts, Amherst (2007).

F. S. Samaria and A. C. Harter, “Parameterisation of a stochastic model for human face identification,” in Proceedings of the Second IEEE Workshop on Applications of Computer Vision, (IEEE, 1994), pp. 138–142.

AT&T Laboratories Cambridge, “AT&T Database of Faces,” (1994). Data retrieved from https://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html .

A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” Tech. rep., University of Toronto (2009).

Y. LeCun, C. Cortes, and C. J. Burges, “MNIST handwritten digit database,” AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist (2010).

L1 determines the aperture stop with diameter 25.4 mm, i.e., a numerical aperture NA = 12.7/150 = 0.0847. The nominal diffraction-limited resolution should be d0 = λ/(2NA) = 3.74 µm. That calculation is irrelevant to PhENN, since objects of that spatial frequency are never presented to it during training.
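The footnote's numbers can be checked directly. A minimal sketch, assuming a 632.8 nm He-Ne wavelength (not stated in the footnote, but consistent with the quoted 3.74 µm figure):

```python
# Check of the aperture-stop footnote: NA and the nominal two-point
# diffraction limit d0 = lambda / (2 NA). The 632.8 nm (He-Ne) wavelength
# is an assumption consistent with the quoted 3.74 um resolution.
wavelength = 632.8e-9        # m (assumed He-Ne line)
aperture_diameter = 25.4e-3  # m, aperture stop set by lens L1
focal_length = 150e-3        # m, focal length of L1

NA = (aperture_diameter / 2) / focal_length
d0 = wavelength / (2 * NA)

print(f"NA = {NA:.4f}")            # -> NA = 0.0847
print(f"d0 = {d0 * 1e6:.2f} um")   # -> d0 = 3.74 um
```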



Figures (10)

Fig. 1 Optical configuration. SF: spatial filter; CL: collimating lens; P: linear polarizer; A: analyzer; SLM: spatial light modulator; L1 and L2: plano-convex lenses; F: focal plane of L2.
Fig. 2 Phase extraction neural network (PhENN) architecture.
Fig. 3 Calibration process. (a) Cumulative distribution function (CDF) of the ground truth. (b) Cumulative distribution function (CDF) of the PhENN output. (c) Linear curve fitting.
Fig. 4 Reconstruction results of PhENN trained with ImageNet. (a) Ground truth for the phase objects. (b) Diffraction patterns captured by the CMOS (after background subtraction and normalization). (c) PhENN output. (d) PhENN reconstruction after the calibration shown in Section 2.3. Columns (i)–(vi) correspond to the dataset from which the test image is drawn: (i) Faces-LFW [40], (ii) ImageNet [41], (iii) Characters, (iv) MNIST Digits [45], (v) Faces-ATT [42, 43], and (vi) CIFAR [44], respectively.
Fig. 5 Resolution test for PhENN trained with ImageNet. (a) Dot pattern for resolution test. (b) PhENN reconstructions for dot pattern with D = 3 pixels. (c) PhENN reconstructions for dot pattern with D = 5 pixels. (d) PhENN reconstructions for dot pattern with D = 6 pixels. (e) 1D cross-sections along the lines indicated by red arrows in (b)–(d).
Fig. 6 Spectral analysis of the ImageNet database. (a & b) 2D normalized power spectral density (PSD) of the ImageNet database on linear and logarithmic scales, respectively. (c & d) 1D cross-sections of (a & b), respectively, along the spatial frequency u.
Fig. 7 Spectral pre-modulation. (a) Original image [41]. (b) Modulated image. (c) Fourier spectrum of the original image. (d) Fourier spectrum of the modulated image.
Fig. 8 Resolution test for PhENN trained with examples from the ImageNet database with spectral pre-modulation according to Eq. (8). (a) Dot pattern for resolution test. (b) PhENN reconstructions for dot pattern with D = 2 pixels. (c) PhENN reconstructions for dot pattern with D = 3 pixels. (d) PhENN reconstructions for dot pattern with D = 6 pixels. (e) 1D cross-sections along the lines indicated by red arrows in (b)–(d).
Fig. 9 Resolution enhancement demonstration. (a) Ground truth for a phase object [41]. (b) Diffraction pattern captured by the CMOS (after background subtraction and normalization). (c) Phase reconstruction by PhENN trained with ImageNet examples. (d) Phase reconstruction by PhENN trained with ImageNet examples that were spectrally pre-modulated according to Eq. (8).
Fig. 10 Spectral post-modulation. (a) Output of PhENN trained with ImageNet. The same as Fig. 9 (c). (b) Modulated output.

Equations (9)


\hat{f} = \arg\min_f \left\{ \| H f - g \|^2 + \alpha \, \Phi(f) \right\} \quad (1)
\hat{f} = \mathrm{DNN}(g) \quad (2)
\mathcal{L} = \sum_k \mathrm{NPCC}\!\left( f_k, \hat{f}_k \right), \quad \text{where} \quad (3)
\mathrm{NPCC}\!\left( f_k, \hat{f}_k \right) \equiv (-1) \times \frac{\sum_{i,j} \left( f_k(i,j) - \langle f_k \rangle \right) \left( \hat{f}_k(i,j) - \langle \hat{f}_k \rangle \right)}{\sqrt{\sum_{i,j} \left( f_k(i,j) - \langle f_k \rangle \right)^2} \, \sqrt{\sum_{i,j} \left( \hat{f}_k(i,j) - \langle \hat{f}_k \rangle \right)^2}}; \quad (4)
\mathrm{NPCC}(\psi, \, a\psi + b) = -1 \quad (a > 0) \quad (5)
S(u,v) \propto \left( \sqrt{u^2 + v^2} \right)^{-2} = \frac{1}{u^2 + v^2} \quad (6)
G(u,v) = \sqrt{u^2 + v^2} \quad (7)
F_e(u,v) = G(u,v) \, F(u,v) \quad (8)
\hat{F}_e(u,v) = G(u,v) \, \hat{F}(u,v) \quad (9)
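The training loss and the spectral pre-modulation equations translate directly into a few lines of NumPy. A minimal sketch (the function names and the 32×32 test array are illustrative, not from the paper):

```python
import numpy as np

def npcc(f, f_hat):
    """Negative Pearson correlation coefficient, Eq. (4); -1 is a perfect match."""
    fc = f - f.mean()
    gc = f_hat - f_hat.mean()
    return -np.sum(fc * gc) / np.sqrt(np.sum(fc**2) * np.sum(gc**2))

def premodulate(image):
    """Spectral pre-modulation, Eqs. (7)-(8): multiply the spectrum by
    G(u,v) = sqrt(u^2 + v^2) to flatten the ~1/(u^2 + v^2) PSD of natural images."""
    n, m = image.shape
    U, V = np.meshgrid(np.fft.fftfreq(m), np.fft.fftfreq(n))
    G = np.sqrt(U**2 + V**2)
    return np.real(np.fft.ifft2(G * np.fft.fft2(image)))

# Eq. (5): NPCC is invariant to affine scaling, so any positively scaled
# and shifted copy of psi scores the optimal value -1.
psi = np.random.rand(32, 32)
print(npcc(psi, 2.0 * psi + 0.3))   # -> -1.0 (up to floating-point error)
```

Since G(0, 0) = 0, the pre-modulated image loses its DC term, consistent with the filter's role of suppressing the low-frequency dominance of the training set.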
